The Labor of Love award is specifically for older games that are still seeing love from the devs. I’d argue that, with them releasing a DLC of such quality that many people wondered whether a DLC could win Game of the Year, it deserves the nomination too.
Re LLM summaries: I’ve noticed that too. For some of my classes shortly after the ChatGPT boom, we were allowed to bring along summaries. I tried feeding it input text and telling it to break it down into a sentence or two. Often it would just produce a generic summary of the topic rather than actually using the concepts described in the original text.
Also, minor nitpick, but be wary of the term “accuracy”. It is a terrible metric for most use cases, and when a company advertises their AI as having high accuracy, they’re likely hiding something. For example, say we want to develop a model that detects cancer in medical images. If our test set consists of 1% cancer images and 99% normal tissue, 99% accuracy is achieved trivially by a model that just predicts “no cancer” every time. A lot of the more interesting problems have class imbalances far worse than this one, too.
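To make that concrete, here’s a minimal sketch using the 1%/99% split from above (the labels and the “model” are made up for illustration):

```python
# Hypothetical test set: 1% "cancer" (1), 99% "no cancer" (0),
# mirroring the class imbalance described above.
labels = [1] * 10 + [0] * 990

def always_negative(_sample):
    """A useless model that predicts "no cancer" for every input."""
    return 0

predictions = [always_negative(x) for x in labels]

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
recall = sum(p == y == 1 for p, y in zip(predictions, labels)) / labels.count(1)

print(f"accuracy: {accuracy:.0%}")  # 99%, despite catching zero cancers
print(f"recall:   {recall:.0%}")    # 0% -- the metric that actually matters here
```

Metrics like recall, precision, or F1 expose what accuracy hides on imbalanced data.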
AI can be good but I’d argue letting an LLM autonomously write a paper is not one of the ways. The risk of it writing factually wrong things is just too great.
To give you an example from astronomy: AI can help filter out “uninteresting” data, which encompasses a large majority of data coming in. It can also help by removing noise from imaging and by drastically speeding up lengthy physical simulations, at the cost of some accuracy.
None of those use cases use LLMs though.
Sorta. The function height(angle) needs to be continuous. From there it’s pretty clear why it works if you know the intermediate value theorem.
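A toy sketch of why continuity is enough: if the height function is negative at one angle and positive at another, somewhere in between it must be exactly zero, and bisection will find that angle. The ground shape below is made up purely for illustration:

```python
import math

# Toy stand-in for height(angle): the signed gap under one pair of legs.
# The only properties the argument needs are continuity and a sign change.
def h(angle):
    return math.sin(2 * angle) - 0.5  # negative at 0, positive at pi/4

def bisect(f, lo, hi, steps=60):
    """Find a zero of a continuous f with f(lo) < 0 < f(hi)."""
    for _ in range(steps):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

angle = bisect(h, 0.0, math.pi / 4)
print(abs(h(angle)) < 1e-9)  # True: a stable angle exists
```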
I haven’t looked into this game beyond your description, but it does sound like a pretty weird model. Do you also have to pay for cards on top of that?
It’s not a card game, it’s an async autobattler. As long as all the characters are roughly balanced against each other, there’s nothing to be gained other than cosmetics (at the current state of the game).
It’s only from spells and only the player itself is immune from them. I don’t think this would even see play in YGO.
From what I remember, and what a quick search on the internet confirmed, B didn’t actually deny her anything. He actually went out of his way to do as much good for her as he could. He claims to have replied “Language.” because he knew other people at NASA with more say over her job would find her, which would get her into trouble (and they did find her even before his first Tweet).
I’m guessing they just take the correct prefix (the first 3 letters of the correct month) and append “tember”, no matter the month.
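That guess at the bug could look something like this (the function and month list are hypothetical, just to show the mechanism):

```python
MONTHS = ["January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December"]

def buggy_month_name(month_index):
    # Hypothetical bug: take the first 3 letters of the right month,
    # then blindly append "tember", no matter the month.
    return MONTHS[month_index][:3] + "tember"

print(buggy_month_name(8))  # "September" -- the one month it gets right
print(buggy_month_name(4))  # "Maytember"
```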
Sure. You have to solve it from inside out:
The huge coincidental part is that ඞ lies at a position that can be reached by a cumulative sum of the integers from 0 up to a given integer. From there on, it’s only a question of finding a way to feed that integer into chr(sum(range(x))).
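Working out the arithmetic: the integer in question is 84, since 0 + 1 + … + 83 = 3486, which is exactly the code point of ඞ (U+0D9E):

```python
x = 84
print(sum(range(x)))               # 3486 == 0x0D9E
print(chr(sum(range(x))))          # ඞ
print(ord("ඞ") == sum(range(x)))   # True
```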
From experience with the beta and memory, your wife (and you) will be able to choose which version to play. Either yours with a ton of DLC or hers with none. You should both be able to use the version with all DLC, but not at the same time.
It’s been a while since we tested this though so things might have changed, including my memory…
If you wanna see a language model (almost) exclusively trained on 4chan, here you go.
Presumably. Wouldn’t take much to fake that though.
after leaving can’t join another for a year
Can you fix this? There was enough misinformation floating around about this already when this feature went into beta.
Adults can leave a family at any time, however, they will need to wait 1 year from when they joined the previous family to create or join a new family.
It should say something like: “After joining, can’t join another for a year”
Assuming we shrink all spatial dimensions equally: with Z, the diagonal also shrinks, so the two horizontal strokes move closer together and you could no longer fit them into the original horizontal strokes. Only once you shrink the Z far enough that it fits within the line width could you fit it into itself again. X, I, and L all work at any arbitrary amount of shrinking, though.
So is the example with the dogs/wolves and the example in the OP.
As to how hard they are to resolve: the dog/wolves one might be quite difficult, but for the example in the OP, it wouldn’t be hard to feed in all images (during training) with randomly chosen backgrounds to remove the model’s ability to draw any conclusions from the background.
However, this would probably unearth the next issue: the human graders, who were probably used to create the original training dataset, have their own biases based on race, gender, appearance, etc. This doesn’t even necessarily mean they were racist/sexist/etc., just that they struggle to detect certain emotions in certain groups of people. The model would then replicate those issues.
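The background-randomization idea mentioned above can be sketched roughly like this (the array shapes, mask format, and tiny synthetic example are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_background(image, mask, backgrounds):
    """Composite the subject onto a random background.

    image: HxWx3 float array; mask: HxW bool array (True = subject pixel).
    With backgrounds chosen at random each epoch, the background carries
    no usable signal for the model.
    """
    background = backgrounds[rng.integers(len(backgrounds))]
    return np.where(mask[..., None], image, background)

# Tiny synthetic example: a 4x4 "image" with a 2x2 subject in one corner.
image = np.ones((4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
backgrounds = [np.zeros((4, 4, 3)), np.full((4, 4, 3), 0.5)]

augmented = randomize_background(image, mask, backgrounds)
print(augmented[0, 0])  # subject pixels untouched: [1. 1. 1.]
```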
Eh, nothing I did was “figuring out which loophole [they] use”. I’d think most people in this thread talking about the mathematics that could make it a true statement are fully aware that the companies are not using any loophole and just say “above average” to save face. It’s simply a nice brain teaser to some people (myself included) to figure out under which circumstances the statement could be always true.
Also if you wanna be really pedantic, the math is not about the companies, but a debunking of the original Tweet which confidently yet incorrectly says that this statement couldn’t be always true.
Same. I had PayPal do an automated chargeback because their system thought I was doing something fraudulent when I wasn’t. Steam blocked my account.
Talking to support and re-buying said game did fix the issue for me.
It’s even simpler: in a strictly increasing sequence, element n is always higher than the average of element n and any earlier elements.
Or in other words, if the number of calls is increasing every day, it will always be above average no matter the window used. With slightly larger windows you can even have some local decreases and still have it be true, as long as the overall trend is increasing (you’ve demonstrated the extreme case of this).
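The claim can be checked numerically; the “calls per day” numbers below are made up, the only requirement is that they’re strictly increasing:

```python
calls = [3, 5, 8, 13, 21, 34]  # hypothetical daily call counts, strictly increasing

def above_trailing_average(series):
    """Check that each element beats the mean of every window ending at it."""
    for n in range(1, len(series)):
        for window in range(2, n + 2):
            chunk = series[max(0, n + 1 - window): n + 1]
            assert series[n] > sum(chunk) / len(chunk)
    return True

print(above_trailing_average(calls))  # True
```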
It is dead AND alive before you check and collapses into dead XOR alive when you check.
But yes, the short description also irked me a little. It’s really hard to write it concisely without leaving out important bits (like we both did too).