For me personally, it’s just a nice-to-have for games that require it. I remember pulling out my Steam Controller a few times when Breath of the Wild needed motion controls.
I’d be interested in setting up the highest quality models to run locally, and I don’t have the budget for a GPU with anywhere near enough VRAM, but my main server PC has a 7900x and I could afford to upgrade its RAM - is it possible, and if so how difficult, to get this stuff running on CPU? Inference speed isn’t a sticking point as long as it’s not unusably slow, but I do have access to an OpenAI subscription so there just wouldn’t be much point with lower quality models except as a toy.
This is a use-after-free, which should be impossible in safe Rust thanks to the borrow checker. The only ways for it to happen are incorrect unsafe code (still possible, but a dramatically reduced code surface to worry about) or a compiler bug. To allocate heap space in safe Rust, you have to use types provided by the language like `Box`, `Rc`, `Vec`, etc. To free that space (in Rust terminology, dropping it, either by calling `drop()` or letting it go out of scope) you must be its owner, and there may be no current borrows (i.e. no references to it may exist). Once the variable is dropped, it is dead, so accessing it is a compile error, and the compiler/std handles freeing the memory.
There are some extra semantics to some of that, but that’s pretty much it. These kinds of memory bugs are basically Rust’s raison d’être - it’s been carefully designed to make most memory bugs impossible without using `unsafe`. If you’d like more information I’d be happy to provide!
You’re looking for Fred Rogers, more commonly Mr. Rogers. He was the host of the popular children’s show Mister Rogers’ Neighborhood, and is revered for having been incredibly compassionate both in public and private.
Even as an (older) zoomer in the US, this was never a thing for me. No one cared what phone you used. If you had an Android you wouldn’t be in iMessage group chats but no one judged you for it.
The issue is that, in the function passed to `reduce`, you’re adding each object directly to the accumulator rather than to its intended parent. These are the problem lines:
if (index == array.length - 1) {
  accumulator[val] = value;
} else if (!accumulator.hasOwnProperty(val)) {
  accumulator[val] = {}; // update the accumulator object
}
There’s no pretty way (that I can think of, at least) to do what you want using methods like `reduce` in vanilla JS, so I’d suggest using a for loop instead - especially if you’re new to programming. Something along these lines (not written to be actual code, just to give you an idea):
let curr = settings;
const split = url.split("/");
for (let i = 0; i < split.length; i++) {
  const val = split[i];
  if (i != split.length - 1) {
    // add a check to see if curr[val] exists
    let next = {};
    curr[val] = next;
    curr = next;
  }
  // add else branch
}
It’s missing some things, but the important part is there - every time we move one level deeper in the URL, we update `curr` so that we keep our place instead of always adding to the top level.
The GPU I used is actually a 1080, with a (rapidly declining in usefulness) Intel 4690k. But I suppose laptop vs desktop can certainly make all the difference. What I really want is GPU virtualization, which I’ve heard AMD supports, but I’m not about to buy a new GPU when what I’ve got works fine.
My experience with single GPU passthrough on Proxmox to a media VM was pretty positive, especially for it being an old Nvidia card. Even as someone doing it for the first time, it just took about 10 minutes to figure out the passthrough itself and another ~15 to figure out some driver issues. And it’s worked perfectly since then. All in all much better than what I’d expected.
Yeah personally I haven’t needed jQuery in years.
Having made the choice to use GTK for a Rust project years ago - before a lot of the more Rust-friendly frameworks were around - this is exactly why I chose it. Nothing to do with DEs or any of that, just looking for a better coding experience. Now I’d probably choose one of the several Rust-focused solutions that have popped up though.
Currying is converting a function with n parameters into n functions that each take one parameter. This is done automatically in most primarily functional languages. Partial application, then, is when you supply fewer than n arguments to a curried function. In short, currying happens at the function definition and partial application happens at the call site.
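To make the distinction concrete, a tiny OCaml sketch (`add` and `add_three` are hypothetical names, not from the code under discussion):

```ocaml
(* In OCaml every "two-argument" function is already curried:
   add has type int -> int -> int. *)
let add x y = x + y

(* Partial application: supplying only the first argument yields a
   new function of type int -> int. *)
let add_three = add 3

let () = print_int (add_three 9)  (* prints 12 *)
```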
Currently the type of `test_increment` is `int * int -> unit -> unit` (it takes its two values as a tuple). What we want is `int -> int -> unit -> unit`. The more idiomatic way would be this function definition:
let test_increment new_value original_value () =
Which would require this change in the callers:
test_case "blah" `Quick (test_increment 1 0);
See, in most primarily functional languages you don’t put parentheses around function parameters/arguments, nor commas between them - parentheses and commas are only used around and between the members of tuples.
I’m not an OCaml person but I do know other functional languages. I looked into Alcotest and it looks like the function after `` `Quick `` has to be `unit -> unit`. Because OCaml has currying, and I think `test_increment` already returns `unit`, all you should have to do is add an extra parameter of type `unit`. I believe that would be done like this:
let test_increment (new_value, original_value) () =
Now the expression `test_increment (1, 0)` returns a function that must be passed a `unit` to run its body. That means you can change the lambdas to e.g. this:
test_case "blah" `Quick (test_increment (1, 0))
I don’t know OCaml precedence rules so the enclosing parentheses here may not be necessary.
I’d also note that taking `new_value` and `original_value` as a tuple would probably be considered unidiomatic unless it makes sense for the structure of the rest of your code, especially because it limits partial application - like how we left the `unit` to be passed later. Partial application/currying is a big part of the flexibility of functional languages.
Edit: if you’re getting into functional programming, you may also consider renaming `increment_by_one` to `succ` or `successor`, which is the typical terminology in functional land.
I think the point is that since it’s open source, there’s not as much worry of Microsoft ruining a good thing.
Excuse me but I have it from a very reputable source that the saying goes "fool me once…
Shame on… Shame on you.
Fool me-- can’t get fooled again"
Eh, I’ll take assembly over like Perl any day. I may be biased since I like to reverse engineer though.
Pretty sure Jython is, though not a Python 3 version. My exposure is as the scripting language, behind Java as the primary language, for Ghidra. And I hate it and installed Eclipse rather than deal with J/Python’s shit.
Typically this thinking is mostly correct - e.g. Manifest V3 - but not in this case. If websites see enough users on Chromium, via user agent or other fingerprinting, they’ll be more willing to require WEI. And unlike Manifest V3 etc., this affects the whole web, not just users of one browser or the other.
Monopolies are bad in every case - including in tech.
Sixty-twelfth?