
  • For the keys - do you mean something like

    ```
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 00000000
    ```

    where 00000000 is replaced with the fingerprint of the key you want to fetch?

    I do agree - the apt-key command is kinda dangerous because, IIRC, it imports keys that will be generally trusted. A similar command that fetches a key by fingerprint into a dedicated keyring, so it can be selected as the signing key for just the repositories we configure for a single application (suite), would be nice.

    I always disliked that signing keys are available for download from the same websites that host the repository. What’s the point in that? If someone can inject malicious code into the repository, they sure as hell can generate a matching signing key & sign the code with that.

    Hence I always verify signing keys / fingerprints against somewhat trustworthy third parties.
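
    On current Debian/Ubuntu, something close to that already exists, if I understand it correctly: apt-key is deprecated, and instead you fetch the key by fingerprint into a dedicated keyring and reference that keyring with signed-by, so it is only trusted for the one repository. A rough sketch - the keyring name and repository URL are placeholders:

    ```
    # fetch the key by fingerprint, then export it into its own keyring
    gpg --keyserver keyserver.ubuntu.com --recv-keys 00000000
    gpg --export 00000000 | sudo tee /usr/share/keyrings/example-archive.gpg > /dev/null
    ```

    The sources.list entry then trusts that keyring for this repository only:

    ```
    deb [signed-by=/usr/share/keyrings/example-archive.gpg] https://example.org/apt stable main
    ```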

    What we really need though is a crowdsourced, reputation-based code review system, where open source code is stored in git-like versioning history and has clear documentation stating for each function what it should and should not do. A reviewer can then pick something as small as an individual function and review the code to confirm (or refute) that the function

    1. does exactly what the interface documentation claims it does
    2. does nothing else
    3. performs input validation (range checks etc.)
    4. is well-written (in terms of performance)

    Then, your reputation score would increase as other users concur with your assessment (or decrease when they disagree), and your reputation would be used as a weighting factor contributing to the “review thoroughness” of a code module you reviewed. E.g., a user with a reputation of 0.5 confirms that a module does exactly what it claims to do:

    - review count: +1
    - module total score: +0.5
    - module total weight: combined previous weights + 0.5
    - average review score = “reviews total score” / “total weight”

    Something like that. And if you have a reputation of “0.9”, the review count goes +1, total score +0.9, total weight +0.9 (so the average score stays between 0 and 1).

    Independent of the user reputation, the user’s review conclusion is stored as “1” (= performs as claimed) or “0” (= does not perform as claimed) for this module.

    Reputation of reviewers could be calculated as the average of all their individual review scores (at the time the reputation is needed), where the score they get for each review is 1 minus the absolute difference between the module’s average review score and their own review conclusion.

    E.g.:

    User A concludes the module does what it claims to do: User A’s assessment is 1 (their score for the module).
    User B concludes the module does NOT do what it claims to do: User B’s assessment is 0.

    Module score is 0.8 (most reviewers agreed that it does what it claims to do)

    User A’s reputation gained from their review of this module: 1 - abs( 1 - 0.8 ) = 0.8
    User B’s reputation gained from their review of this module: 1 - abs( 0 - 0.8 ) = 0.2

    If both users have previously gained a reputation of 1.0 from 10 reviews (where everyone agreed on the same assessment, thus full scores):

    User A’s new reputation: ( 1 * 10 + 0.8 ) / 11 ≈ 0.982
    User B’s new reputation: ( 1 * 10 + 0.2 ) / 11 ≈ 0.927
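
    For concreteness, here is a minimal C++ sketch of that bookkeeping - every name in it is made up, and the numbers reproduce the example above:

    ```cpp
    #include <cmath>
    #include <iostream>

    struct Module {
        double totalScore  = 0.0;  // sum over reviews of (reviewer reputation * conclusion)
        double totalWeight = 0.0;  // sum over reviews of reviewer reputation
        int    reviewCount = 0;

        // conclusion: 1.0 = "performs as claimed", 0.0 = "does not"
        void addReview(double reviewerReputation, double conclusion) {
            totalScore  += reviewerReputation * conclusion;
            totalWeight += reviewerReputation;
            ++reviewCount;
        }

        double averageScore() const {
            return totalWeight > 0.0 ? totalScore / totalWeight : 0.0;
        }
    };

    // Reputation gained from a single review: 1 - |module average - own conclusion|
    double reviewGain(double moduleAverage, double ownConclusion) {
        return 1.0 - std::fabs(moduleAverage - ownConclusion);
    }

    int main() {
        const double moduleAvg = 0.8;               // the module's average score from above
        const double gainA = reviewGain(moduleAvg, 1.0);  // 0.8
        const double gainB = reviewGain(moduleAvg, 0.0);  // 0.2

        // both users previously earned a full score from 10 reviews each
        const double repA = (1.0 * 10 + gainA) / 11;  // ~0.982
        const double repB = (1.0 * 10 + gainB) / 11;  // ~0.927
        std::cout << repA << " " << repB << "\n";
    }
    ```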

    The basic idea being that every module in the decentralized review database would have a review count, which everyone could sort and filter by to find the least-reviewed modules (presumably the weakest links) and focus their attention on those.

    If technically feasible, a decentralized database should prevent any single entity (secret services, botfarms) from falsifying the overall review picture too much. I am not sure this can be accomplished - especially with the sophistication of the climate-destroying large language model technology. :/



  • > Makefiles/automake isn’t a reasonable expectation these days, with a plethora of languages and build toolchains, but good, clear instructions are definitely something to include.

    As for the Makefiles - I meant that for whatever build toolchain the project uses, because the rules to build a project are an essential part of it, linking the source code into a working library or executable. Whether it is CMake, GNU make, or whatever else there is, is not so important, as long as the toolchain is available cross-platform.

    I think what is really missing in the open source world is a distribution-agnostic standard for describing application dependencies, so that package maintainers can auto-generate distro packages with the distribution-specific dependencies based on that “dependencies” file.

    Similar to Debian dependencies - Depends: libstdc++6 (>= 10.2.1) - but in a way that identifies code modules, not packages, so that distributions that bundle software together differently will still be able to resolve findPackageFor( dependency ).

    I would really like to add this kind of info to my projects and have a tool that can auto-build a repo-package from those.
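
    Purely hypothetical, but I imagine a manifest along these lines, naming code modules and version constraints instead of distro packages (every field name here is invented):

    ```
    # dependencies.manifest (hypothetical format)
    Runtime-Modules: cxx-stdlib (>= 10.2.1), sqlite3 (>= 3.40)
    Build-Modules:   cmake (>= 3.20)
    ```

    A distro tool would then run its findPackageFor( module ) lookup against the local package database to emit the real Depends: line for that distribution.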



  • > Edit: didn’t mean to imply Linux is easier than Windows to learn in general.

    It is though. People just neglect that in today’s world, no one “learns” Windows from scratch.

    Learning to do anything from scratch is easier on most Linux distros than on Windows. The tools are better and the documentation is light years ahead. Windows is a steaming pile of horseshit in comparison. But once you’ve made yourself a cozy nest in the middle of said pile, getting to the comfy whirlpool hot tub that is Linux requires you to scale the walls of horseshit surrounding your nest. And that is what makes people claim “but Linux hard, muh duh!”








  • Taken from the Wikipedia page on Rust:

    > On February 8, 2021, the formation of the Rust Foundation was announced by its five founding companies (AWS, Huawei, Google, Microsoft, and Mozilla).[36][37] In a blog post published on April 6, 2021, Google announced support for Rust within the Android Open Source Project as an alternative to C/C++.[38]

    Four out of five founding companies are evil to the bone, with only Mozilla being somewhat reputable. That does not give me much confidence, sadly.

    > On November 22, 2021, the Moderation Team, which was responsible for enforcing community standards and the Code of Conduct, announced their resignation “in protest of the Core Team placing themselves unaccountable to anyone but themselves[39]”

    How am I not surprised?

    > In May 2022, the Rust Core Team, other lead programmers, and certain members of the Rust Foundation board implemented governance reforms in response to the incident.[40]

    At least that. However, I don’t care enough for the time being to spend my morning reading up on what exactly they implemented.


  • Thanks for laying out your concerns. As a C++ developer who does not know the other languages you speak of (I assume Rust and Go), I can agree with some of your points, but others I see differently:

    1. C++ can be complex because it has a lot of features, and especially the newer standards have brought some syntax that is hard to understand or read at times. However, those elements are not frequently used, or if they are, the developer gets used to them quickly and they won’t make development slow. As a matter of fact, most development time should be spent thinking about algorithms - and thinking very well before implementing them - and until implementation, the language does not matter. I do not think that language complexity leads to increased bugs per se. My biggest project is just short of 40k lines of code, and most of the bugs I produced were the classical “off by one” or missing range checks - bugs that you can just as well produce in other languages.

    2. C++ no longer requires you to do manual memory management - that is what smart pointers and RAII-style programming are for (see the sketch after this list).

    3. I can’t make a qualified comment on that, due to lack of expertise - you might be right.

    4. You’re somewhat repeating point 1) here with slow development. But you raise a good point: web standards have become insane in terms of quantity and interface sizes. Everyone and their dog wants to reinvent the wheel, and that in itself requires a very large team to support, I would say. As stated for point 1), I do not agree that development in C++ has to be slower.

    5. True. As someone who just suffered from problems introduced on Windows (the Cygwin POSIX message queue implementation got broken by Win10, and inotify does not work on Windows Subsystem for Linux), I can confirm that while the C++ standard library is not much of a problem, the moment you interface with the host OS you leave the standard realm and it becomes “zombieland”. Also, for some reason, the realtime library implementation on macOS is different, breaking some very simple time-based functions. So yeah, that’s annoying to work around, but it can be done by creating platform-specific wrapper libraries that expose a uniform API (a sketch follows at the end of this comment). For other languages, it appears this is done by the compilers, which is probably better - meaning the I/O operations were taken into those languages’ core features.

    6. I am highly doubtful of people relying on garbage collection - a programmer who doesn’t know exactly when their objects come into existence, and when they cease to exist, is likely to make much bigger mistakes and produce very inefficient code. The aforementioned smart pointers in C++ solve this issue: object lifetime is the scope of the smart pointer declaration, and for shared pointers, object lifetime ends when the last owner of the pointer leaves the scope in which its copy is declared. For concurrent programming, I do not know if you mean concurrency (threads) or multiple people working on the same project. While multi-threading can be a bit “weird” at first, you have a lot of control over shared variables and memory barriers in C++, which might enable a team to produce a browser that is much faster - and I believe speed is a core requirement towards modern browsers.
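
    To illustrate points 2 and 6, a minimal sketch of deterministic lifetimes with RAII and smart pointers (Resource is a made-up demo type):

    ```cpp
    #include <cstdio>
    #include <memory>
    #include <thread>

    struct Resource {
        Resource()  { std::puts("acquired"); }
        ~Resource() { std::puts("released"); }  // runs deterministically, no GC involved
    };

    int main() {
        // unique_ptr: the object's lifetime is exactly the enclosing scope (RAII)
        {
            auto owned = std::make_unique<Resource>();
        }  // "released" is printed right here, at a point the programmer can see

        // shared_ptr: the object lives until the last owner lets go,
        // even if that owner is another thread
        auto shared = std::make_shared<Resource>();
        std::thread worker([keep = shared] {
            // 'keep' co-owns the Resource for the duration of this thread
        });
        shared.reset();  // the main thread gives up its ownership here
        worker.join();   // the Resource is released once 'keep', the last owner, is gone
    }
    ```

    In both cases the destructor runs at a point you can reason about, instead of at some garbage collector’s leisure.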

    As for your tl;dr: definitely not “less concurrency”, that makes no sense. The other points may or may not be true, keeping in mind the answers I gave above.
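
    And a minimal sketch of the wrapper library idea from point 5: one uniform monotonic-clock function with a platform-specific body per OS. The function name is made up, but the platform APIs underneath are real:

    ```cpp
    // uniform_time.h - one declaration, one behaviour, three implementations
    #include <cstdint>

    #if defined(_WIN32)
      #include <windows.h>
      inline uint64_t monotonicNanos() {
          LARGE_INTEGER freq, count;
          QueryPerformanceFrequency(&freq);
          QueryPerformanceCounter(&count);
          return static_cast<uint64_t>(count.QuadPart * (1000000000.0 / freq.QuadPart));
      }
    #elif defined(__APPLE__)
      #include <mach/mach_time.h>
      inline uint64_t monotonicNanos() {
          static mach_timebase_info_data_t tb = {};
          if (tb.denom == 0) mach_timebase_info(&tb);
          return mach_absolute_time() * tb.numer / tb.denom;
      }
    #else
      #include <time.h>
      inline uint64_t monotonicNanos() {  // plain POSIX
          timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          return static_cast<uint64_t>(ts.tv_sec) * 1000000000ull + ts.tv_nsec;
      }
    #endif
    ```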






  • > Scandinavia has always been very left side

    Maybe from a hard neo-nazi perspective. Denmark and Sweden especially have right-wing extremist parties (Denmark Democrats + New Right + Danish People’s Party together ~= 14.3%, Sweden Democrats 17.5%) with a voter base that has been established over a longer time. The German right-wing populists have risen to that level only in recent elections, which is frightening.

    Geert Wilders is not “the new guy” from the Netherlands - he has been a populist right-wing piece of shit for decades. Unfortunately, the average Dutch person over 40 / outside university towns is also quite racist under the surface. I lived there for 4 years and speak fluent Dutch with a German accent, and since they felt “safe” with their bigotry around me, I have heard enough racist and sexist bullshit from “average middle class” Dutch people that I didn’t feel comfortable in that country anymore. The young people in urban centres are okay, but unfortunately they are not a large enough demographic.

    As for comparing with the US - maybe not a good idea: even young US Americans see the Democrats for the corporate shills they are, and know that they have to vote for them just to prevent a Handmaid’s Tale Season 6 becoming a documentary.

    The US is the scary example for Western Europe of “this will happen here if you don’t pay attention”. No one in Europe will be able to say “I didn’t know” if we slip into a totalitarian regime filled with hate and controlled by corporations, because it may be happening in front of our eyes with a ~10 year head start in the US. I just hope that’s not what is going to happen in the end, but things have progressed far too much towards the worst dystopian future imaginable for this century.