• 0 Posts
  • 67 Comments
Joined 1 year ago
Cake day: June 17th, 2023


  • Chobbes@lemmy.world to Linux@lemmy.ml · Thoughts on this?
    9 months ago

    I don’t really have much of an opinion about Wayland but it’s still funny to me whenever somebody using Wayland shits on X11 and then tries to share their screen on Zoom or something. If Wayland ends up being great I’ll be happy, but for now X11 just kind of works, so I don’t understand why people are so eager to switch? This isn’t to say I don’t understand the desire to build something better and more secure than X11, I’m just not sure what the end user gets out of Wayland right now. I don’t have VRR monitors and stuff, though, so maybe I’m not running into problems I would be if I wanted fancier features. Plus, I use xmonad and some other stuff right now that won’t work on Wayland, so I don’t have much incentive to try it. Hopefully everything gets Wayland updates eventually.



  • I can totally understand why the terminal seems confusing and scary right now, but it’s actually awesome for this kind of stuff because you can just copy and paste commands to do pretty much anything to your computer. Using a GUI often means having a bunch of screenshots that you have to follow manually to do something that a single command can do. Once you’re used to the terminal for these kinds of things GUIs can seem barbaric. Of course it seems scary before you know much about it because it seems like the fucking matrix, and you should only run commands from sources you trust (because they can do anything)… But it’s worth giving a chance, I think.

    For this particular instance… often you can just download an application on Linux from a website and run it, but this is almost never the preferred way of doing things. Usually you install applications from your package manager, which is kind of like an App Store (but free), and the advantage of this is that 1) you don’t have to hunt down sketchy executables on the internet, you have a vetted source of safe packages from your distribution, and 2) you can easily update all of your packages. Having a one stop shop for all of your applications (or at least most) is really great, but it can be a little annoying when something you want isn’t in the official repos (like this), though it’s usually a fairly rare occurrence.


  • The abysmal adoption of DNSSEC is just embarrassing, and I haven’t heard any good arguments for why we shouldn’t do it. There’s one blog post that gets passed around as justification for not adopting DNSSEC, but it doesn’t really go into any technical detail and is mostly just the author saying “I’m scared of governments and TLDs”… which is maybe fair, but you still have to trust them for regular CA certs and everything, so why not make the base secure?

    Honestly, I might care slightly more about DNSSEC than IPv6 adoption… IPv4 exhaustion and NATing everywhere sucks, but the fact that you can’t trust DNS is like… insane.


  • DNS setups can get fairly complicated with enterprise VPNs and stuff, but the main thing is probably just that DNS is built entirely around caching, so when something does go wrong or you’re trying to update something it’s easy for there to be a stale value somewhere. It’s also really fundamental, so when it breaks it can break anything.

    Overall, though, DNS isn’t terribly complex. It’s mostly just a key-value store with some caching. Running your own nameservers is pretty cool and will give you a much better understanding of how it all fits together and scales.
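
    To make the “key-value store with some caching” picture concrete, here’s a tiny C sketch (purely illustrative, not a real resolver; the records, addresses, and TTLs are made up) of the decision a caching resolver makes on every lookup, which is also where stale answers come from:

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* Toy model of a resolver cache: the key is (name, record type) and
     * every answer carries a TTL that says how long it may be reused. */
    struct record {
        const char *name;       /* key part 1: domain name            */
        const char *type;       /* key part 2: record type (A, MX...) */
        const char *value;      /* the cached answer                  */
        time_t      cached_at;  /* when the answer entered the cache  */
        int         ttl;        /* seconds the answer stays fresh     */
    };

    static struct record cache[] = {
        { "example.com", "A",  "192.0.2.1",        0, 300  },
        { "example.com", "MX", "mail.example.com", 0, 3600 },
    };

    /* Serve from cache while fresh; otherwise signal that an upstream
     * query is needed.  A "stale value somewhere" is just a cache that
     * hasn't hit this expiry check yet. */
    static const char *lookup(const char *name, const char *type, time_t now)
    {
        for (size_t i = 0; i < sizeof cache / sizeof cache[0]; i++) {
            struct record *r = &cache[i];
            if (strcmp(r->name, name) == 0 && strcmp(r->type, type) == 0)
                return (now - r->cached_at <= r->ttl) ? r->value : NULL;
        }
        return NULL;  /* cache miss */
    }

    int main(void)
    {
        time_t now = time(NULL);
        for (size_t i = 0; i < sizeof cache / sizeof cache[0]; i++)
            cache[i].cached_at = now;  /* pretend we just cached these */

        const char *a = lookup("example.com", "A", now);
        printf("example.com A -> %s\n", a ? a : "(expired or missing: re-resolve)");
        return 0;
    }
    ```

    Real resolvers are obviously far more involved (delegation, negative caching, and so on), but that stale-until-the-TTL-expires behaviour is exactly why a DNS change can look broken from one network and fine from another.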


  • > But it all happens at compile time. That’s the difference.

    No, when you have a library like SDL you will have functions that wrap lower level libraries for interacting with the screen and devices. At SDL’s compile time you may have preprocessor macros or whatever which select the implementation of these functions based on the platform, but at run time you still have the extra overhead of these SDL function calls when using the library. The definitions won’t be inlined, and there will be extra overhead to provide a consistent higher level interface, as it won’t exactly match the lower level APIs. It doesn’t matter if it’s compiled, there’s still overhead.
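
    As a concrete sketch of what I mean (not actual SDL source, just the shape of the argument, with a hypothetical USE_WAYLAND flag): the backend is picked by the preprocessor at compile time, but every caller still goes through the wrapper at run time, and in a real shared library that wrapper is an exported symbol, so it can’t simply be inlined away into the application:

    ```c
    #include <stdio.h>

    /* Backend chosen at compile time; only one definition ends up in
     * the built library. */
    #ifdef USE_WAYLAND
    static void backend_create_window(int w, int h)
    {
        printf("wayland window %dx%d\n", w, h);  /* stand-in for compositor calls */
    }
    #else
    static void backend_create_window(int w, int h)
    {
        printf("x11 window %dx%d\n", w, h);      /* stand-in for Xlib/xcb calls */
    }
    #endif

    /* The library's public, platform-independent entry point.  Each use
     * pays for this extra hop plus whatever work is needed to smooth
     * over differences between backends. */
    void lib_create_window(int w, int h)
    {
        if (w <= 0 || h <= 0)   /* generic validation a backend may not need */
            return;
        backend_create_window(w, h);
    }

    int main(void)
    {
        lib_create_window(640, 480);
        return 0;
    }
    ```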

    > C is just a language, it’s not native. Native means the binary that will execute on hardware is decided at compile time, in other words, it’s not jitted for the platform it’s running on.

    Wine doesn’t really involve any jitting, though, it’s just an implementation of the Windows APIs in the Linux userspace… So, arguably it’s as native as anything else. The main place where JIT will occur is for shader compilation in DXVK, where the results will be cached, and there is still JIT going on on the “native windows” side anyway.

    If you don’t consider C code compiled to native assembly to be native, then this is all moot, and pretty much nothing is native! I agree that C is just a language so it’s not necessarily compiled down to native assembly, but if you don’t consider it native code when it is… Then what does it mean to be native?

    > the binary that will execute on hardware is decided at compile time

    This is true for interpreted languages. The interpreter is a fixed binary that executes on hardware, and you can even bake in the program being interpreted into an executable! You could argue that control flow is determined dynamically by data stored in memory, so maybe that’s what makes it “non-native”, but this is technically true for any natively compiled binary program too :). There’s a sense in which every program that manipulates data is really just an interpreter, so why consider one to be native and not the other? Even native assembly code isn’t really what’s running on the processor due to things like microcode, and arguably speculative execution is a fancy kind of JIT that happens in hardware which essentially dynamically performs optimizations like loop unrolling… It’s more of a grey area than you might think, and nailing down a precise mathematical definition of “native code” is tricky!
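
    To make that concrete, here’s a toy interpreter in C (the bytecode is made up, it’s purely an illustration): the interpreter itself is a perfectly ordinary native binary fixed at compile time, and the “program” is just data in memory steering its control flow.

    ```c
    #include <stdio.h>

    /* A made-up stack-machine bytecode. */
    enum op { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    int main(void)
    {
        /* The "program" being interpreted: push 2, push 3, add, print, halt. */
        int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };

        int stack[16];
        int sp = 0;  /* stack pointer   */
        int pc = 0;  /* program counter */

        /* Control flow here is driven entirely by the data in code[],
         * yet everything that executes is plain native machine code. */
        for (;;) {
            switch (code[pc++]) {
            case OP_PUSH:  stack[sp++] = code[pc++];          break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
            case OP_PRINT: printf("%d\n", stack[sp - 1]);     break;
            case OP_HALT:  return 0;
            }
        }
    }
    ```

    And as mentioned above, you could bake the code[] array and the interpreter into a single executable, which is more or less what “compiling” an interpreted program into a self-contained binary amounts to.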

    > I assume you’re not talking about a compiler that generates C code here, right? If it’s outputting C, then no, it’s not native code yet.

    But it will be native code :). Pretty much all compilers go through several translation steps between intermediate languages, and it’s not uncommon for compilers to use C as an intermediate language, Vala does this for instance, and even compilers for languages like Haskell have done this in the past. C is a less common target these days, as many compiler front ends will spit out LLVM instead, but it’s still around. Plus, there’s often more restricted C-like languages in the middle. Haskell’s GHC still uses Cmm which is a C-like language for compilation, for example.
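
    For a feel of what “C as an intermediate language” looks like, here’s a toy backend written in C (hypothetical, only in the spirit of Vala or GHC’s old C backend, not their actual output): it pretends the front end already parsed “print(2 + 3)” and emits a C program, which a separate C compiler then turns into native code (e.g. by piping the output into `cc -x c -`).

    ```c
    #include <stdio.h>

    int main(void)
    {
        /* Pretend the front end parsed the source program "print(2 + 3)". */
        int lhs = 2, rhs = 3;

        /* Emit C to stdout; a C compiler finishes the job of producing
         * native code, exactly as it would for hand-written C. */
        printf("#include <stdio.h>\n");
        printf("int main(void) {\n");
        printf("    printf(\"%%d\\n\", %d + %d);\n", lhs, rhs);
        printf("    return 0;\n");
        printf("}\n");
        return 0;
    }
    ```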

    > Well first off, games don’t ship with their HLSL (unlike OGL where older games DID have to ship with GLSL), they ship with DXBC/DXIL, which is the DX analog to spir-v (or, more accurately, vice versa).

    Sure, and arguably it’s a little different to ship a lower level representation, but there will still be a compilation step for this, so you’re arguably not really introducing a new compilation step anyway, just a different one for a different backend. If you consider a binary that you get from a C compiler to be native code, why shouldn’t we consider this to be native code :)? It might not be as optimized as it could have been otherwise, but there’s plenty of native programs where that’s the case anyway, so why consider this to be any different?

    Ultimately the native vs. non-native distinction doesn’t really matter, and arguably it doesn’t even really exist, since it’s hard to settle on a satisfying formal definition of it. The only thing that matters is performance, and people often use things like “it’s a compiled language” or “it goes through fewer translation layers / layers of indirection” as rules of thumb to guess whether something is less efficient than it could be, but that doesn’t always hold up and it doesn’t always matter. Arguably this is a case where it doesn’t really matter. There’s some overhead with Wine and DXVK, but it clearly performs really well (and supposedly better in some cases), and it’s hard to truly compare because the platforms are so different in the first place, so maybe it’s all close enough anyway :).

    Also to be clear, it’s not that I don’t see your points, and in a sense you’re correct! But I don’t believe these distinctions are as mathematically precise as you do, which is my main point :). Anyway, I hope you have a happy holidays!


  • You’re explaining yourself fine, I just don’t necessarily agree with the distinction. It’s like when people say a language is “a compiled language” when that doesn’t really have much to do with the language, it’s more of an implementation detail. It’s a mostly arbitrary distinction that makes sense to talk about sometimes in practice, but it’s not necessarily meaningful philosophically.

    That said, SDL isn’t really any different. It’s not translating languages, but you still have additional function calls and overhead wrapping lower level libraries, just the same as Wine. DXVK has an additional problem where shaders have to be converted to SPIR-V or something, which arguably makes it “more non-native”, but I don’t think that’s as obvious a distinction either. You probably wouldn’t consider C code non-native, even though it’s translated to several different languages before you get native code, and usually you consider compilers that use C as a backend to be native code compilers too, so why would you consider HLSL -> SPIR-V to be any different? There are reasons why you might make these distinctions, but my point is just that it’s more arbitrary than you might think.


  • This is what I thought too, but in my case it turned out my drive was busted and btrfs detected an error and went read-only… which was super annoying, and my initial reaction was “ugh, piece of shit filesystem!” But ultimately I’m grateful it noticed something was wrong with the drive. If I had just been using ext4 I would have had silent data corruption. In that sense other filesystems do silently do their jobs… but they also potentially fail silently, which is a little scary. Checksums are nice.



  • I really do recommend doing a Gentoo install at some point, because I think you would learn a lot from it. It’s a really nice experience and a well put together distro. The compiling is potentially not as bad as you think, but there are a couple of packages that are notoriously painful to compile (there are prebuilt binaries available for some of the painful ones if desired too). You’d probably get a decent amount out of an Arch install too. Arch isn’t my cup of tea, but lots of people like it and it’d be quicker to get started than Gentoo. I’m not sure I’d recommend it for you at this stage but eventually you should check out NixOS too! You can even try the package manager out on any distro you want. NixOS is really interesting, but it does things a bit different from other distros, and if you’ve done an Arch / Gentoo install it’ll be interesting to see what NixOS does in contrast.

    Other things to mess with… You mention partitioning, so make sure to check out LVM, and also consider reading a bit about filesystems. Maybe give btrfs a go :).

    I wouldn’t worry about daily driving either Gentoo or Arch. Once you have them set up you’ll probably be fine.



  • I think even if you’re tech-savvy you can have issues with Arch tbh. I don’t think the distro is without merit — a minimal rolling release binary distribution is clearly something people want… But I’m not sure Arch does a great job of being that (for me, at least), and I’ve personally found pacman and the official packages to be kind of lacking (the keyring update issue that they’ve maybe finally fixed; installing, pinning, or downgrading specific package versions being either unsupported or poorly supported; kernel modules being removed immediately on upgrade even though the currently running kernel may still need them; etc.). It just doesn’t feel very polished in my experience and for my use cases (clearly it works for some people!), and that’s what has driven me away from Arch personally. I think a lot of this stems from Arch’s philosophy of being aggressively minimal, which is maybe fair enough… but I don’t think it’s for everybody.


  • Chobbes@lemmy.world to Linux@lemmy.ml · Just install EndeavorOS lol
    11 months ago

    Who says it hasn’t happened? :P

    If it hasn’t, I would just assume that Slackware isn’t a big enough target and that anybody in a position to man-in-the-middle a large number of people would have better targets. I mean, to be clear, TLS is not a silver bullet either, but it goes a long way toward ensuring the integrity of the data you receive over the internet, in addition to hiding the contents.

    Distros usually sign their ISOs with PGP as well (Slackware does this), so it’s a good idea to verify those signatures, as it’s a second channel you can use to double-check the validity of the ISO (though I’m not sure many people actually do this). Of course, anybody can make PGP keys, so you have to find out which key is actually supposed to be signing the ISO, otherwise an attacker can just make a bogus key and tell you that that’s the Slackware signing key (on the official website too, because it doesn’t use TLS!). The web of trust arguably helps some (though this can be faked as well unless you actually participate in key signing parties or something), and you can hope that the Slackware public key is mirrored in several places that you trust so you can compare them… but at the end of the day, for most people, all trust in the distribution comes from the domain name, and if you don’t have TLS certificates you’re building on a weak foundation of trust. Maybe it will be fine because you’re not a big enough target for somebody to bother with, but in this day and age it’s pretty much trivial to set up TLS certificates, and that gets you a far better foundation… so why take the risk? Why expose your users to more risk than necessary?