Welcome to Friendica.Eskimo.Com
Home of Censorship Free Hosting

PeerTube recommendation algo alpha build
cross-posted from: lemmy.world/post/28808772
Finally released an alpha build for the PeerTube recommendation algorithm!
Basic UI is complete. If you want to try it out, the link is here:
github.com/solidheron/peertube…

New features since the last build:
- Sort by videos that share your time engagement similarity.
- Sort by videos that share your like similarity.
- Display of like similarity cosine values.
- Basic information shown for recommended videos (title, account, and channel names).
- 404 check for generated instance links (so you don't get stuck clicking into dead videos; you'll know which instance hosts the video).
- De-ranking for previously seen videos (simply a 0.5x multiplier on time and like similarity).

Features from previous builds:
- Ability to input multiple instance domain names (DNs) and generate playable video links.
- Limit of 5 recommendations per channel to avoid floods (e.g., during testing, The Linux Experiment would dominate otherwise; this limit is more of a failsafe than a feature).

Personal thoughts:
I still think cosine similarity beats chronological algorithms.
This algorithm also synergizes with other algorithms: it's great for finding videos that appear next to or below what you're currently watching. You can also revisit videos you previously liked to help strengthen your like similarity vectors.
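To make the idea concrete, here is a minimal sketch of cosine-similarity ranking with the 0.5x de-rank multiplier for previously seen videos mentioned above. This is illustrative only, not the extension's actual code; the vector layout and field names are assumptions.

```javascript
// Cosine similarity between a user profile vector and a video's keyword vector.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return na && nb ? dot / (Math.sqrt(na) * Math.sqrt(nb)) : 0;
}

// Rank videos by similarity; already-seen videos get a 0.5x multiplier.
function rank(userVec, videos, seen) {
  return videos
    .map(v => ({ ...v, score: cosine(userVec, v.vec) * (seen.has(v.id) ? 0.5 : 1) }))
    .sort((x, y) => y.score - x.score);
}
```

With a user vector of [1, 0], a fresh video with vector [1, 0] outranks an identical but already-seen one, which in turn outranks an orthogonal [0, 1] video.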
Moving forward: basic design philosophies and current issues
There's an issue I'm calling the "Linux pipeline."
Basically, Linux-related videos tend to dominate PeerTubeās well-produced content.
Since the algorithm relies on English words in descriptions, titles, and tags, Linux videos (which sometimes have fewer general keywords) end up being more "orthogonal" to typical user vectors, causing lower ranking.

Another challenge:
It's really hard to properly combine like cosine similarity and time engagement cosine similarity.
You could add them, but it doesn't fully make sense:
- High like similarity + high time engagement similarity = you probably like and will watch the video longer.
- But short videos can be liked even if they contribute almost nothing to time engagement (because time engagement is based on percentage watched × video length).

If I combined them, it would basically enter machine learning territory:
You'd have to adjust proportions dynamically based on user behavior.
Since I want this algorithm scoped to one person only (no data sharing yet), that level of ML is out of scope for now.

(Sharing data across devices could come later; Brave browser has sync features, and PeerTube watch history syncing could be possible.)
Summary:
Most of the data structure is settling into place.
Future updates will probably focus on expanding the data structure and making small improvements.
GitHub - solidheron/peertube_recomendation_algorythm: currently just a browser extension that monitors the PeerTube videos you watch and stores them locally
I think open discovery algorithms are the way. We are against algos but sorting by like similarity would be beneficial.
What are you guys thinking? @dessalines@lemmy.ml @nutomic@lemmy.ml Are you optimistic about this or fuck any algorithms?
"This Linux thing is better than normal computers"
A few years ago my wife and I built a computer out of old parts for her friend's then 10-year-old son. Last month we were visiting them, and I heard the wife's friend say something funny that I thought I'd share with you.
They live on the other side of the city, this was the kid's first computer, and his mom doesn't have much computer experience either, so our goal was to build something that was easy to use and hard to break from the beginning. Originally I chose elementary OS since it seemed to fit the bill, but after a year or two it turned out that it couldn't be upgraded to a new major version without a full reinstall, so it got stuck on an older version. We didn't visit that often, and the kid's games still worked, so it wasn't a major issue until Factorio broke due to a glibc incompatibility.
When his birthday was coming up last month, we bought him an SSD to make the computer a little bit zippier without a major upgrade, and I thought I'd give him a brand new Linux experience too, so I asked for advice here and in the end chose Bazzite. While I was helping the kid with the installation, I overheard his mom saying in the other room:
This Linux thing... We've never had any problems with it, he just clicks something to install it and it works. Unlike normal computers, where you always have to do things and fix them.
Perhaps not the most eloquent, but I consider it a very good review.
Seeking info on how to update OS from an img file
I guess this would be with Arch, but technically it's SteamOS beta
Probably will need console commands, but if there's flatpak software with this ability, that would be sweet.
Any suggestions?
pacman -Syu
to update?
Must install apps/tools
- GIMP (with photogimp patch)
- Steam
- Librewolf (I could also opt for a chromium based browser)
- Tor Browser (to browse onion links/throwaway browser)
- Heroic Games Launcher
- Prism Launcher
- latest Java LTS (either from Adoptium or OpenJDK; I don't care about flashy new features)
- LibreOffice Still (similar to the second reason above), plus OnlyOffice as an AppImage due to LibreOffice's weird handling of PPSX files and PowerPoints
- QEMU/KVM with virt manager
- Gnome evolution (if it's gtk desktop I could opt for other email clients)
- Proton-GE
- WINE
- Ghostty (kinda sucks that it's based on libadwaita, and GNOME forces this theme on you no matter your desktop)
- Fish/ZSH (Fish not having POSIX compatibility is kinda annoying)
- MPV (I could still use VLC but I prefer mpv because it can stream YouTube links)
- yt-dlp (I could opt for a GUI for convenience's sake)
- BTOP
- Fastfetch
At the very least:
Yazi
Eza
Kitty
Fish
Fastfetch
Feh
Trash-cli
Micro
Spotify-player
Nmcli
Polybar
Rofi (fuzzel for wayland)
Librewolf
I have an Asus laptop from 2007
Like the title says, I want to install a Linux distro on my old laptop. I am currently looking into installing an SSD, but I want to learn a distro for fun! I haven't been able to find a good current resource aside from the Linux Masters here, so I am actually asking for help on the Internet! What distro is the best!?
EDIT: thanks so much everyone for your recommendations and advice! I installed a couple of different systems before deciding that I think the laptop can support Fedora with KDE Plasma, and I'm finding it really attractive and easy to use. You will see how the performance holds up once I get some more disk space used! If it runs into trouble I might switch the machine back over to Mint; that one seemed to run really well, felt pretty familiar from my Windows days, also seemed more low-end, and booted a little faster. I think I might even end up switching to Linux on my desktop, I had so much fun with it last night!! I really appreciate all the information and will probably be experimenting with a more lightweight build on this computer in the future! I'm a Linux user and it was easier than I ever thought! ❤️
I want to learn a Distro for fun.
Are you just using this laptop to dip your toes into Linux and see if you like it? I would recommend Debian + XFCE. It's lightweight, it prioritizes stability over new features, and it's a fairly easy UI for a newbie to understand. Alternatively Linux Mint MATE Edition might be worth a try. It's also lightweight but is a bit more "up to date" than Debian feature wise.
Debian gets feature updates significantly more slowly than other distros; instead it focuses on ensuring stability and security. It's rock solid.
Linux Mint is actually based on Ubuntu (which itself is derived from Debian), so for the most part the two are fairly similar. There are a few key differences but for someone learning Linux you don't need to worry about them. Pick one of them, get your feet wet, and then google the differences to see if you want to switch.
After all, endless distro hopping is a rite of passage for all fledgling Linux users!
Objectives of learning and fun?
You do not state noobliness, ease of setup or time to install, number of failures/retries or anything like that.
EDIT: you did state noobliness later on in comments, so... I'd go stock Debian + LXQt.
~~For all that, I'd recommend Arch. Do not use the archinstall script; that reduces both learning and fun. Resource? Follow the ArchWiki and go through lots of linked pages at each step. If you do wuss out and install stock Debian (+ LXQt)~~
maybe partition off a spare 10-20GB so you can play around with an Arch install after you realise how boring and uneducational the others are (joke)
Allow traffic only through tun0 via wlan0, ssh, and localhost in and out
Hi all, I'm trying to have my rpi5 running Raspberry Pi OS communicate with the Internet only through the tun0 interface (VPN). For this I wanted to create a ufw ruleset. Unfortunately, I've hit a roadblock and I can't figure out where I'm going wrong.
Can you help me discover why this ruleset doesn't allow Internet communication over tun0? When I disable ufw I can access the Internet.
The VPN connection is already established, so it should keep working, right?
I hope you can help me out!
This is the script with the ruleset:
sudo ufw reset

# Set default policies
sudo ufw default deny incoming
sudo ufw default deny outgoing

# Allow SSH access
sudo ufw allow ssh

# Allow local network traffic
sudo ufw allow from 192.168.0.0/16
sudo ufw allow out to 192.168.0.0/16

# Allow traffic through VPN tunnel
sudo ufw allow in on tun0
sudo ufw allow out on tun0

# Add routing between interfaces (I read it's necessary, not sure why?)
sudo ufw route allow in on tun0 out on wlan0
sudo ufw route allow in on wlan0 out on tun0

sudo ufw enable
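A likely culprit (an assumption, not a confirmed diagnosis): with `default deny outgoing`, the encrypted VPN packets themselves are blocked, because they leave over wlan0 to the VPN server, not over tun0. A hedged sketch of the missing rule, with placeholder endpoint and port; substitute the server address and port from your own VPN config:

```shell
# Assumption: VPN server at 203.0.113.10 on UDP 1194 (OpenVPN default).
# Replace with the endpoint/port from your WireGuard or OpenVPN config.
sudo ufw allow out on wlan0 to 203.0.113.10 port 1194 proto udp
```

Without a rule like this, tun0 can be fully open and traffic still won't flow, since the tunnel's outer packets never reach the VPN server.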
How I made a blog using Lemmy - a write-up
This is a followup to my introduction of BlogOnLemmy, a simple blog frontend. If you haven't seen it, no need because I will be explaining how it works and how you can run your own BlogOnLemmy for free.
Leveraging the Federation
Having a platform to connect your content to like-minded people is invaluable. The Fediverse achieves this in a platform-agnostic way, so in theory it shouldn't matter which platform we use. But platforms have different userbases that interact with posts in different ways. I've always preferred the forum variety, where communities form and discussion is encouraged.
My posts are shared as original content on Lemmy, and that's who it's meant for. I chose a traditional blog style to make a more palatable platform for a wider audience, and in this way also to promote Lemmy.
Constraints
Starting off, I did not want the upkeep of another federated instance. Not every new thing deployed on the Fediverse needs to stand on its own or be made from the ground up as an ActivityPub-compatible service. Rather, it can use existing infrastructure, already federated, already primed for interconnectivity. Taking it one step further means not having a back-end at all, a 'dumb' website as it were. Posts are made, edited, and cross-posted on Lemmy.
The world of CSS and JavaScript on the other hand - how websites are styled and made feature-rich - is littered with libraries. Treated like black boxes, often just a few of their functions are used, with the rest clogging up our internet experience. Even jQuery, which is used by over 74% of all websites, is already 23kB in its smallest form. I'm not planning on having the smallest possible footprint*, but rather showing that a modern web browser provides an underused toolset of natively supported functionality; something the first webdevs would have given their left kidney for.
Lastly, to improve maintainability and simplicity, one page is enough for a blog. Provided that its content can be altered dynamically.
*See optimization
How it's made
1. URL: Category/post
Even before the browser completely loads the page, we can take a look at the URL. With our constraints, only two types of additions are available to us: the anchor and GET parameters. When an anchor, or '#', is present, browsers scroll to a specific place on the page after loading. We can hijack this behavior and use it to load predefined categories, like '#blog' or '#linkdumps'. For posts, '#/post/3139396' looks nicer than '?post=3139396', but anchors are rarely search engine compatible. So I'm extracting the GET parameter to load an individual post.
Running JavaScript before the page has finished loading should be swift and easy, like coloring the filters or setting Dark/Light mode, so it doesn't delay the site.
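The routing described above can be sketched in a few lines. This is an illustration of the idea, not the blog's actual code; the helper name and return shape are assumptions:

```javascript
// Route off the URL: a GET parameter selects an individual post,
// otherwise the '#' anchor selects a category (defaulting to 'blog').
function route(url) {
  const u = new URL(url);
  const postId = u.searchParams.get('post');   // e.g. ?post=3139396
  if (postId) return { view: 'post', id: postId };
  const category = u.hash.replace(/^#/, '');   // e.g. #blog, #linkdumps
  return { view: 'category', name: category || 'blog' };
}
```

Because `URL` and `URLSearchParams` are native browser APIs, no library is needed for any of this.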
2. API -> Lemmy
A simple 'Fetch' is all that's required. Lemmy's API is extensive already, because it's used by different frontends and apps that make an individual's experience unique. When selecting a category, we are requesting all the posts made by me in one or more Lemmy communities. A post or permalink uses the same post_id as on the Lemmy instance. Pretty straightforward.
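A sketch of what such a fetch looks like against Lemmy's v3 HTTP API; the instance and community names are examples, and the exact parameters the blog uses are an assumption:

```javascript
// Build a Lemmy post-list request URL (API v3, GET /api/v3/post/list).
// Instance and community values are placeholders.
function postListUrl(instance, community, page = 1) {
  const u = new URL(`https://${instance}/api/v3/post/list`);
  u.searchParams.set('community_name', community);
  u.searchParams.set('sort', 'New');
  u.searchParams.set('page', String(page));
  return u.toString();
}

// Usage: fetch(postListUrl('lemmy.world', 'linux')).then(r => r.json())
```

A single permalinked post would instead hit `/api/v3/post?id=<post_id>` with the same id as on the instance.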
3. Markdown -> HTML
When we get a reply from the Lemmy instance, the posts are formatted in Markdown, just as they are when you submit a post. But browsers render HTML, a different markup language. This is where the only code that's not written by me steps in: a Markdown-to-HTML layer called snarkdown. It's very efficient and probably the smallest footprint possible for what it is, around 1kB.
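To show the idea behind such a layer, here is a toy Markdown-to-HTML converter (NOT snarkdown itself, just a sketch of the technique): a cascade of regex replacements over the Markdown source.

```javascript
// Toy Markdown-to-HTML sketch covering headings, bold, italics, and links.
// Real converters like snarkdown handle many more cases.
function mdToHtml(md) {
  return md
    .replace(/^### (.*)$/gm, '<h3>$1</h3>')
    .replace(/^## (.*)$/gm, '<h2>$1</h2>')
    .replace(/^# (.*)$/gm, '<h1>$1</h1>')
    .replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>')
    .replace(/\*(.+?)\*/g, '<em>$1</em>')
    .replace(/\[(.+?)\]\((.+?)\)/g, '<a href="$2">$1</a>');
}
```

The design choice is the same as snarkdown's: no parse tree, just ordered string transforms, which is what keeps the footprint around 1kB.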
Optimization
When my blog launched, I was using a Cloudflare proxy for no-hassle HTTPS handling, caching and CDN. Within the EU, I'm aiming for sub-100ms* to be faster than the blink of an eye. With a free tier of Cloudflare we can expect a variance between 150 and 600ms at best, and intercontinental caching can take seconds.
Nginx and OpenLiteSpeed are regarded as the fastest webservers out there; I often use Apache for testing, but for deployment I prefer Nginx's speed and reliability. I could sidetrack here and write another 1000 words about the optimization of static content and TLS handling in Nginx, but that's a story for another time.
* For the website, API calls are made asynchronously while the page is loaded and are not counted
Mythical 14kB, or less?
All data transferred on the internet is split up into manageable chunks, or frames. Their size, or Maximum Transmission Unit, is defined by IEEE 802.3-2022 1.4.207 with a maximum of 1518 bytes*. They usually carry 1460 bytes of actual application data, the Maximum Segment Size.
RFC 6928, followed by most server operating systems, proposes 10x MSS (= Congestion Window) for the first reply. In other words, the server 'tests' your network by sending 10 frames at once. If your device acknowledges each frame, the server knows to double the Congestion Window every subsequent reply until some are dropped. This is called TCP Slow Start, defined in RFC 5681.
10 frames of 1460 bytes contain 14.6kB of usable data. Or at least, it used to.
The modern web changed with the use of encryption. The Initial Congestion Window, in my use case, includes 2 TLS frames, and encryption takes an extra 29 bytes away from each frame, reducing our window to 11.4kB. If we manage to fit the website within this first Slow Start routine, we avoid an extra round trip in the TCP/IP protocol, speeding up the website by as much as your latency to the server. Min-maxing TCP traffic is the name of the game.
* Can vary with MTU settings of your network or interface, but around 1500 (+ 14 bytes for headers) is the widely accepted default
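The window arithmetic above can be checked directly. The 10-frame window and 1460-byte MSS come from the RFCs cited; the 2 reserved TLS frames and 29 bytes of per-frame TLS overhead are the post's own measurements for its use case:

```javascript
// Initial congestion window arithmetic from the post.
const MSS = 1460;                // bytes of application data per frame
const initFrames = 10;           // RFC 6928 initial congestion window
const plain = initFrames * MSS;  // the "mythical" pre-TLS window: 14600 bytes

const tlsFrames = 2;             // frames consumed by the TLS handshake (per the post)
const tlsOverhead = 29;          // extra bytes TLS takes from each remaining frame
const usable = (initFrames - tlsFrames) * (MSS - tlsOverhead);

console.log(plain, usable);      // 14600 and 11448 bytes, i.e. ~14.6kB down to ~11.4kB
```

Hence the final 10.7kB build fits inside one Slow Start round trip, while the 13.3kB build does not.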
[Image] Visualizes two raw web requests, 10.7kB vs 13.3kB, with TCP Slow Start:
- Above Blue: Request Starts
- Between Green: TLS Handshake
- Inside Red: Initial Congestion Window
Icons
Icons are tricky, because describing pixel positions takes up a considerable amount of data. Instead, SVGs are commonplace, creating complex shapes programmatically and significantly reducing the footprint. Feathericons is a FOSS icon library providing a beautiful SVG-rendered solution for my navbar. For the favicon, or website icon, I coded it manually with the same font as the blog itself. But after different browsers took liberties rendering the font and spacing, I converted it to a path-traced design, describing each shape individually and making sure it's rendered consistently.
Regular vs. Inline vs. Minified
If we sum up the file sizes, we're looking at around 50kB of data. Luckily servers compress* our code, and are pretty good at it, leaving only 15kB to be transferred; just above our 11kB threshold. By making the code unreadable for humans using minifying scripts, we can reduce the final size even more. Only... the files that make up this blog are split up. Common guidelines recommend doing so to prevent one big file clogging up load times. For us that means splitting our precious 11kB into multiple round trips, the opposite of our goal. Inline code blocks to the rescue, with the added bonus that the entire site is now compressed as one file, making the compression more efficient and ending optimization at a neat 10.7kB.
* The Web uses Gzip. A more performant choice today is Brotli, which I compiled for use on my server
In Practice
All good in theory, now let's see the effect in practice. I've deployed the blog 4 times, and each version was measured for total download time from 20 requests. In the first graph we notice the impact of not staying inside the Initial Congestion Window, where only the second scenario is delayed by a second round trip when loading the first page.
Scenarios 1 and 3 have separate files, so separate requests are made, prioritizing the display of the website (the first file) but neglecting potential usable space inside the init_cwnd. Comparing the second graph, this ends up almost doubling their respective total load times.
The final version is the only one transferring all the data in one round trip, and is the one deployed on the main site. With total download times as low as 51ms, around 150ms as a soft upper limit, and 85ms average in Europe. Unfortunately, that means worldwide tests show load times of 700ms, so I'll eventually implement a CDN.
- Regular (14.46kB): no minification, separate files - dev3.martijn.sh/
- Inline (13.29kB): no minification, one file - dev1.martijn.sh/
- Regular Minified (10.98kB): but still using separate files - dev2.martijn.sh/
- Inline Minified (10.69kB): one page as small as possible - martijn.sh/
I'll be leaving up dev versions until there's a significant update to the site
Content Delivery Network
Speeds like this can only be achieved when you're close to my server, which is in London. For my Eurobros that means blazing fast response times. For anyone else, cdn.martijn.sh points to Cloudflare's CDN and git.martijn.sh to GitHub's CDN. These services allow us to distribute our blog to servers across the globe, so requesting clients always choose the closest server available.
GitHub Pages
An easy and free way of serving a static webpage. Fork the BlogOnLemmy repository and name it 'GitHub-Username'.github.io. Your website is now available as username.github.io and even supports the use of custom domain names. Mine is served at git.martijn.sh.
While testing its load times worldwide, I got response times as low as 64ms with 250ms on the high end. Not surprisingly they deliver the page slightly faster globally than Cloudflare does, because they're optimizing for static content.
Extra features
- Taking over the Light or Dark mode of the user's device is a courtesy more than anything else, with a selectable permanent setting added on top. It's my way of highlighting the overuse of cookies and localStorage, by giving the user the choice to store data on a website built from the ground up not to use any.
- A memorable and interactable canvas to give a personal touch to the about me section.
- Collapsed articles with a 'Read More'-Button.
- 'Load More'-Button loads the next 10 posts, so the page is as long as you want it to be
Webmentions
Essential for blogging in the current year, Webmentions keep websites up to date when links to them are created or edited. Fortunately Lemmy has us covered: when posts are made, the originating instance sends a Webmention to the hosts of any links mentioned in the post.
To stay within scope I'll be using webmention.io for now, which enables us to get notified when linked somewhere else by adding just a single line of HTML to our code.
Notes
- Enabling HTTP2 or 3 did not speed up load times, in fact with protocol negotiations and TLS they added one more packet to the Initial Congestion Window.
- For now, the apex domain will be pointing directly to my server, but more testing is required in choosing a CDN.
- Editing this site for personal use requires knowledge of HTML and JS for now, but I might create a script to individualize blogs easier.
Edit: GitHub | ./Martijn.sh > Blog
Feather ā Simply beautiful open source icons
Feather is a collection of simply beautiful open source icons. Each icon is designed on a 24x24 grid with an emphasis on simplicity, consistency and readability. (feathericons.com)
PewDiePie using Hyprland on Arch wasn't on my 2025 bingo card!
What he did was half a decade ago! The comment section is wild, keep it down.
I saw this in my Mastodon feed and wanted to share, and that was a mistake.
Edit: I label myself an anarcho-syndicalist, and I don't watch PewDiePie. I have my fair share of opinions about him from his early days, but there is no need to label Felix a nazi. I used my brain cells to check some of his latest videos and I don't see any mention of nazism, fascism, or any political mentions! What I do see is Felix starting a family in Japan, traveling around Japan, and just being a human living his life!
PewDiePie: I installed Linux (so should you)
I don't normally watch him, but this popped up in my feed, and I'm pretty impressed. Dude really fell down the Arch+Hyprland rabbit hole and ended up loving it.
Probably one of the largest YouTubers switching to Linux, and he's very positive about it.
That Hyprland rice is pretty sick too.
I installed Linux (so should you)
#ad - Shop Gfuel sale: https://creator.gfuel.com/pewdiepie #Subscribe Get "The Kjellberg Mail" (family newsletter w Mertz): https://the-kjellberg-mail.... (YouTube)
It's good, and it's funny. So much so that I'm jealous.
With this potential critical mass combined with the gaming community it's all downhill for Windoze from here.
PS: To force the GPU on Steam games in Linux, because games might unknowingly perform needlessly badly.
arscyni.cc: modernity ∝ nature.
arsCynic: modernity ∝ nature | Angelino Desmet
A sentient stack of stardust's thoughts on nothing and everything, influenced by Cynicism, pursuing modernity in proportion to nature. (www.arscyni.cc)
The power of Linux
Today I took my first steps into the world of Linux by creating a bootable Mint Cinnamon USB stick to fuck around on without wiping or partitioning my laptop drive.
I realised Windows has the biggest vulnerability for the average user.
While booting off of the usb I could access all the data on my laptop without having to input a password.
After some research, it appears drives need to be encrypted to prevent this, so why is this not the default in Windows?
I'm sure there are people aware of it, but for the layman this is such a massive vulnerability.
This is not that big of a deal most of the time, since you are the only person interacting with your computer, but it's worth remembering when you decide to recycle or donate -- you have to securely wipe in that case. Also bear in mind, if you do encrypt your drive, there are now more possibilities for total data loss.
Oh, fun fact: you can change a user's Windows password from inside Linux. Comes in handy for recovery, i.e., a user forgot their password.
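For the curious, the tool commonly used for this is chntpw, run from the live system against the mounted Windows partition. A sketch (the mount point and username are placeholders, not from the comment above):

```shell
# Sketch: inspect/reset a Windows account password with chntpw from a live USB.
# Assumes the Windows partition is mounted at /mnt/win; adjust to your setup.
sudo chntpw -l /mnt/win/Windows/System32/config/SAM   # list user accounts
sudo chntpw -u 'SomeUser' /mnt/win/Windows/System32/config/SAM   # interactive edit
```

Blanking the password is generally safer than setting a new one, and none of this works against an encrypted drive, which is the thread's whole point.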
Yes, any laptop without an encrypted storage drive will have its data accessible by someone booting from a live USB.
It really is a massive vulnerability, but it's not well known because so few people even understand the concept of a 'live USB' to make it a widespread threat or concern.
So yeah, if you're ever in possession of a Windows machine that doesn't have an encrypted disk, you can view the users' files without knowing their password via a live USB.
It's also not limited to laptops.
I installed Linux (so should you) video by PewDiePie
I installed Linux (so should you)
#ad - Shop Gfuel sale: https://creator.gfuel.com/pewdiepie #Subscribe Get "The Kjellberg Mail" (family newsletter w Mertz): https://the-kjellberg-mail.... (YouTube)
System76 Releases COSMIC Alpha 7 Desktop - Last Step Before Beta
COSMIC Alpha 7: Never Been Beta
Wrapping up COSMIC's main features before the first beta release. (System76 Blog)
The developer is just kind of insane. They reimplemented wlroots from scratch all on their own, a feat that cannot be overstated, and the reason they did that is because of how massively they were outpacing wlroots development in terms of features.
just some things:
- a full proper animation stack for eyecandy
- single window capture (sway still doesn't have this)
- keypress forwarding to specific windows (like xdotool)
- global hotkey support
- insanely good documentation: wiki.hyprland.org/
- color management and HDR (sway is just now getting this)
- proper permission management for screencopy (coming in the newest version, first compositor to implement)
- a full plugin system for extra things you want to do
- a proper app not responding dialog
If the feature exists, Hyprland almost certainly has it. They are not minimalists, which I appreciate right now while Wayland is still getting everything sorted out.
Oh they finally fixed the bloody dock. I used it ages ago when it was still oval and those square icons looked terrible on it. This is much better.
Have they given any updates on HDR support? I remember them mentioning years back that they planned it for the final release but I've not heard anything since.
Secure Boot on or off with Mint?
Cool story bro. And I am one of the 9 people that worked on the team at Intel to implement your modern EFI/UEFI.
I just don't have the time or energy to sit here and explain the whole fucking stack to a bunch of people who mostly couldn't care less. But Secure Boot, it's a good thing, and the tools on Linux get better every hour. Check out Lanzaboote.
github.com/nix-community/lanza…
GitHub - nix-community/lanzaboote: Secure Boot for NixOS [maintainers=@blitz @raitobezarius @nikstur]
Fair. Although, I consider Microsoft's market "Most laptops" since Apple kind of does its own thing and Chromebooks are ultra-low end laptops. Thus Microsoft gets ~95% of the market for themselves.
Personally, I'd say that's a clear case of monopoly since MS controls this entire segment of "non-Apple, non-ultra low power laptop, PCs", but you're right - there are other players. The thing is, they have relatively tiny niches in which they thrive and in fact pose no threat to the monopolist.
But now I see how you see it as an oligopoly, which is quite valid.
With FOSS, what is to stop scammers from hiding malware or worse in their programs?
Something I've wondered. One of those "if it's too good to be true, it probably is" type things. With all the FOSS, especially for Linux, installing package after package because a web search said it would fix your problem, how is it that Linux isn't full of malware and such?
I'd like to understand better so I can explain to others who are afraid of FOSS for those reasons. My best response is that since it's open source, people can see what it's doing and would notice something malicious right away. I wouldn't, since I'm not that into code, but others would.
SOLVED GNOME extensions stopped working after upgrade to Fedora Workstation 42
I've just upgraded to Fedora Workstation 42 and am now unable to activate any GNOME extensions. The little switches in the GUI do not respond and it's the same for all extensions. The Extensions and Extensions Manager apps are both installed as flatpaks - do I need to adjust their permissions in Flatseal? Is the problem due to something else? Thanks!
Edit/solution:
I totally missed the 'Use Extensions' switch at the very top. All my extensions are working on the current GNOME version (48) now. I am the most silly. Hopefully the other solutions in the comments will be useful to someone else in the future.
As others have pointed out, the extensions are likely not (officially) compatible with the new version of GNOME yet. Which extensions are you having trouble with?
There are a couple of extensions available for installation through dnf for which Fedora takes care of making them compatible at the same time that it makes a new version of GNOME available. Caffeine and Dash To Panel are two examples. For a full-ish list, try dnf search gnome-shell-extension.
Alternatively you can also try manually editing the extension's metadata to "make it compatible". Your mileage may vary with this approach, but it worked fine for Net Speed Simplified, for example.
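For the manual-edit route: GNOME extensions declare their supported versions in a "shell-version" array inside their metadata.json. A sketch of marking one compatible with GNOME 48 (the extension UUID below is a placeholder, and as noted above, your mileage may vary):

```shell
# Sketch: append "48" to an extension's supported shell versions.
# UUID is a placeholder; find yours under ~/.local/share/gnome-shell/extensions/
EXT=~/.local/share/gnome-shell/extensions/example@example.org
jq '."shell-version" += ["48"]' "$EXT/metadata.json" > /tmp/metadata.json &&
  mv /tmp/metadata.json "$EXT/metadata.json"
```

This only silences the version check; if the extension uses APIs that actually changed in the new GNOME, it will still break.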
This is really annoying.
I'm trying to use as few extensions as possible, so I only use 4. 2 of them haven't been upgraded to 48 yet and aren't usable for now.
This is especially annoying because I'm trying to respect GNOME's philosophy with my extensions...
Correct way to configure tc rules?
OS: Ubuntu 24.04
I have searched this for a while and it seems I can't get my search terms right.
Back when the ifup/down system worked, custom scripts were put under '/etc/network/if-up.d' etc. Now Ubuntu uses netplan. But where to put a custom script? One that would handle tc rules, in my case.
/etc/networkd-dispatcher/routable.d was suggested by the internet, but that just throws an error during boot: ERROR:Unknown state for interface.
From looking around more, all I know now is that netplan's way of running a custom script when an interface comes up is the networkd-dispatcher way, which does not work in Ubuntu 24.04.
Ahh I see, I didn't know what tc was and assumed it was a typo and ignored it. I searched for a bit for your specific problem and didn't come up with much other than this:
You could also try
/usr/lib/networkd-dispatcher/routable.d/
Looks like you can also specify the scripts directory with the -S flag:
manpages.ubuntu.com/manpages/n…
My other thought is: maybe the location for the scripts is correct, but you're having another issue that's causing the unknown-state error?
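In case the location is fine and the script itself is the problem: a routable.d hook for tc could look like this sketch. The interface name and rates are made-up examples, the script must be root-owned and executable, and networkd-dispatcher passes the interface name in the IFACE environment variable:

```shell
#!/bin/sh
# Example /etc/networkd-dispatcher/routable.d/50-tc:
# apply a simple token-bucket shaper when the interface becomes routable.
# Interface name and rates are illustrative; adjust for your setup.
[ "$IFACE" = "enp3s0" ] || exit 0
tc qdisc replace dev "$IFACE" root tbf rate 10mbit burst 32kbit latency 400ms
```

Using `qdisc replace` instead of `add` keeps the script idempotent if the hook fires more than once.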
my new experiences with KDE Plasma and GNOME!
I haven't tried Linux in a while and only really played around with XFCE and Cinnamon when reviving my old laptops, but I've just tried KDE Plasma and GNOME for a bit and DAMN they look good. Modern looking, and not the weird Mica effect that Windows has. Very clean!
They both look great and I wouldn't say one looks better than the other, it's probably just preference: GNOME looks more bubbly + rounded + a bit like macOS in a good way, and Plasma looks more blocky + similar to the Win10 taskbar.
The touchscreen bits still appear to need a bit of work; on both Plasma and GNOME I made it freeze. For Plasma I opened the launcher button and tried to use the onscreen keyboard, and it kept opening and closing very quickly; for GNOME I did the three-finger swipe-up gesture and everything became unresponsive. Also, Bluetooth weirdly doesn't work on KDE but does on GNOME. Huh. Maybe just my device?
I really want to switch soon, maybe during the holidays I'll get round to it.
edit: I think it's pretty crazy that a relatively small team (compared to the likes of Microsoft) can offer such a good UI and overall user experience! That's insane! The people who help make the distros are doing very good work and I wish them the best of luck! Hopefully the weird quirks and compatibility issues will iron out and Linux becomes mainstream.
Signal group for Linux beginners, in German!
I've created a Linux beginners' group in German! The goal is to help with Linux and exchange ideas about it, with no language barrier.
Advanced users are welcome to share their knowledge!
Ok, screw you all then XD
The mood is better on the cooler instance
Email client recommendations ?
Hi, I tried using an email client over a year ago, and after trying almost all of them in the span of a week I gave up in frustration. Would anyone have a recommendation for an email client:
- That is actively maintained
- That is not controlled by a company that could pull a Mozilla on it (Thunderbird)
- That doesn't need 77 dependencies and 450 GB (WTF KMail)
- That is reasonably fast and light and not too bloated (I just want to read emails, I don't need a full app suite...)
- That supports POP
- That supports writing HTML messages (sorry Claws, I really liked you, but occasionally I kinda need to write formatted messages to preserve other people's sanity)
- That supports reading HTML messages without showing the HTML version as an attachment, so that every single email has the paperclip icon and I can't tell which messages have real attachments (Sylpheed, I think?)
- That supports Maildir format for portability (why isn't it the default everywhere already, instead of weird non-portable formats?)
- If possible, that doesn't have an interface so awful it's a pain to find anything (Thunderbird)
I also tested Geary and another one, but I don't remember much about it... I can't find out whether Geary supports POP and Maildir; its documentation page is... well, it's a list 8 lines long, but on a page called "Documentation", so it technically counts as documentation, I guess?
wiki.gnome.org/Apps/Geary/Docu…
Any recommendation would be greatly appreciated!
Lmao, that's what ChatGPT recommended after I ranted about all the email clients I had tried
fetchmail/getmail6 to fetch the mails via POP3 in maildir format + a local roundcube server + CLI tool to still be able to read mails outside home
but I thought it might be a bit overkill
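For anyone curious what the getmail6 half of that setup looks like, a sketch of a rc file (server, account, and path are placeholders; getmail6 reads this from its config directory, typically ~/.config/getmail/getmailrc):

```ini
[retriever]
type = SimplePOP3SSLRetriever
server = pop.example.com
username = you@example.com
# prefer password_command over a plaintext password line

[destination]
type = Maildir
# trailing slash matters: it tells getmail this is a Maildir
path = ~/Mail/
```

Run `getmail` from cron or a systemd timer and the mail lands in the Maildir, where roundcube (or any Maildir-aware client) can read it.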
How much of a pain is it to install Nvidia GPU drivers, really?
Installing Nvidia drivers from official repos provided by the maintainers of your distro? Easy as pie.
Installing Nvidia drivers from nvidia's website? Good luck my friend, I hope you know what you're doing.
Barely a week later and I had to do the thing. My partner uses LMDE and Nvidia 535 is the newest version in their repos, but we need nvidia 565+ for Kingdom Hearts 3.
Installing from the website wasn't as hard as I remember.
- Blacklist Nouveau.
- As root, without an X server running, run the nvidia*.run file from the website
- Follow the prompts.
- Verify your initramfs rebuilt correctly before rebooting.
- Reboot and enjoy your actually current driver.
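The steps above, condensed into a command sketch (run as root; the .run filename is whatever you downloaded, and `update-initramfs` is the Debian/LMDE tool — other distros use dracut or mkinitcpio):

```shell
# 1. Blacklist Nouveau so it doesn't grab the GPU on boot.
cat > /etc/modprobe.d/blacklist-nouveau.conf <<'EOF'
blacklist nouveau
options nouveau modeset=0
EOF
update-initramfs -u                  # rebuild the initramfs with nouveau excluded

# 2. Drop to a console with no X/Wayland running.
systemctl isolate multi-user.target

# 3. Run the installer from nvidia's website and follow the prompts.
sh ./NVIDIA-Linux-x86_64-*.run

# 4. Check the initramfs rebuilt cleanly, then reboot.
reboot
```

This is a sketch of the posted procedure, not a substitute for it: if the initramfs rebuild errors out, fix that before rebooting or you may boot to a black screen.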
Immich 1.132 Brings Smoother Syncing, Mobile UI Enhancements
Immich 1.132, the self-hosted photo and video backup solution, replaces TypeORM with Kysely, introduces SQLite support, and enhances sync performance. Bobby Borisov (Linuxiac)
:ro
(read only).
Are there still breaking changes when upgrading to a newer version?
In the past it felt like I was running an alpha version, where I spent more time fixing it than enjoying its features.
It's not too bad once you get used to it. It's still a lot of "throw this color here, check results, looks shit, change color, rinse and repeat." QT theming is pretty similar.
I had just taken days to perfectly set up my homemade theme last distro, matching QT and GTK, only to find out I didn't like the distro. I gave up after that and just slapped Gruvbox Dark on everything.
When in doubt and the work to theme gets too much: Gruvbox, Dracula, Tomorrow/Tomorrow Night, or Solarized will cover just about everything.
Do I need to update Windows 11 on a Windows / Linux Mint dual boot system?
Hey guys,
I use my laptop with a Windows 11 / Linux Mint dual-boot system.
Since I actually use Linux Mint 98% of the time, I wanted to ask if it is still necessary to do the system and security updates for Windows 11 as long as Windows is not needed?
Thanks for the answer. I was unsure whether it could be dangerous to the PC itself if you ignore the updates for both operating systems.
Yes, I've also read about problems with dual-boot systems after Windows updates, which is why I've refused to use Windows too often to make the updates worthwhile.
Sometimes Windows just overwrites your GRUB (or whatever bootloader you use). But it's relatively easy to fix using your distro's installation media. If this happens, refer to your distro's documentation or community forums to fix it.
I do recommend however in the future to not put Windows and Linux on the same disk, but have 2, each for respective OS. That way, there's no way Windows will touch your Linux bootloader on the other disk, and you can still allow GRUB (or other bootloader) to chain-load Windows boot manager from the other disk.
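The chain-load setup described above is usually a custom GRUB menu entry; a sketch for /etc/grub.d/40_custom on a UEFI system (the UUID is a placeholder for the ESP on the Windows disk — find it with `blkid` — and you run `update-grub` afterwards):

```shell
menuentry "Windows (other disk)" {
    insmod part_gpt
    insmod fat
    insmod chain
    search --fs-uuid --set=root XXXX-XXXX   # placeholder: ESP UUID of the Windows disk
    chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}
```

With Windows on its own disk and GRUB on the Linux disk, a Windows update can only ever touch its own EFI partition; the worst case is re-picking the Linux disk in the firmware boot order.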
Exploiting Undefined Behavior in C/C++ Programs for Optimization: A Study on the Performance Impact
A thorough examination of the performance effects of using undefined behaviour in compiler optimizations.
Method:
1. Modifying clang to not use UB where this is possible
2. Run a large suite of benchmarks on different architectures, compare results for modified and unmodified clang
3. Do statistics on the results
4. Examine performance deviations
5. Discuss factors which could bias results.
Very good science!
Result in short:
Only on ARM and if no link-time optimization is used, a systematic small positive performance effect can be seen. For Intel and AMD CPUs, there are no systematic improvements.
Average effects are typically below 2%, which is the typical effect of system and measurement noise. Often, effects are even negative. In some cases, benchmarks show large differences, and many of these can be fixed by simple modifications to the compiler or program.
For me, this result is also not too surprising:
- If allowing / using Undefined Behavior (UB) would allow for systematically better optimizations, Rust programs would be systematically slower than C or C++ programs, since Rust does not allow UB. But this is not the case. Rather, sometimes Rust programs are faster, and sometimes C/C++ programs. A good example is the Debian Benchmark Game comparison.
- Now, one could argue that in the cases where C/C++ programs turned out to be faster than Rust programs, at least in these cases exploiting UB gave an advantage. But if you examine these programs in the Debian benchmark game (or in other places), this is not the case either. They do not rely on clever compiler optimizations based on assumptions around UB. Instead, they make heavy use of compiler and SIMD intrinsics, manual vectorization, inline assembly, manual loop unrolling, and in general micro-managing what the CPU does. In fact, these implementations are long and complex and not at all idiomatic C/C++ code.
- Thirdly, one could say that while these benchmark examples are not idiomatic C code, one at times needs that specific capability to fall back to things like inline assembly, and that this is a characteristic capability of C and C++.
Well, Rust supports inline assembly as well, though it is rarely used.
RustĀ vsĀ C++ g++ - Which programs are fastest? (Benchmarks Game)
Rust C++ g++ - Which programs have fastest performance?benchmarksgame-team.pages.debian.net
Why doesn't the Linux subreddit leave Reddit already?
It's kind of ironic to me that Linux is all for free and open source, but still uses a proprietary platform, and a horrible one at that. Before the fediverse, I'd understand, but now, there is no excuse whatsoever.
I understand that we can't just get up and leave everything proprietary behind all at once, since we have iPhones and Android phones. We all use proprietary software of some form, but I am of the mindset of using the least amount of proprietary possible.
I will ALWAYS look for FOSS first. I also want to make it as hard as possible for any corporation to track me. They'll probably still be able to track me, but I'm not going without a fight.
I could say the same about the Linux kernel using GitHub, but I understand how massive of an undertaking it would be to move the whole kernel to another platform. I'm sure there are other factors, too. Anyway, I just wanted to start a discussion and hear people's thoughts.
Thank you
Permissions issue setting up Plex
Hey all, I'm stumped for the first time since adopting Linux. I can't get Plex to see any of my folders and I cannot just move my movies to plexmediaserver because I don't have the permissions.
I'm having a hard time wrapping my head around the permissions commands and I'm not sure what the simplest way to set up my Plex library is. Has anyone been through this process that can help me out?
I remember running into this as well. It's because Plex installs itself with its own user. So post-install, you need to add the Plex account to your user Group and restart the service.
sudo usermod -a -G <your-group> plex
sudo service plexmediaserver restart
Two commands and bam! You're in business.
ref: askubuntu.com/questions/458547…
I cannot get Plex server to see any directories
I have tried adding a directory outside my home directory. /media/plex and soft linked my /Videos directory to it. I have also added plex to udev group and also to my user group uid=124(plex) gi...Ask Ubuntu
What kind of mindset do you need to be successful starting and continuing to use Linux?
We all have opinions on how to procedurally get someone started using Linux. To mixed effect. I wonder if we could be more successful if we paid closer attention to the machine between the seat and the keyboard. What mindsets can we instill in people that would increase the likelihood they stick with it? How would we go about instilling said mindsets?
I have my own opinions I will share later. I don't want to direct the conversation.
Back in the mid 2000s, we (my company) were on Windows, including three Windows 2000 Server licences. And we needed to upgrade. But it wasn't sustainable for the small company to pay for all these licences, when a free option was available.
So we slowly moved all applications over to cross-platform alternatives, Outlook to Thunderbird (called Firebird in those days), office to OpenOffice (now LibreOffice), Internet Explorer to Firefox, Corel Draw to Gimp, Company software like accounting to a XAMPP stack etc.
Once this was established and running well, we just changed the underlying platform from Windows to Ubuntu/Gnome, cursed for a few days and went on with our lives. And it worked for the past 20 years and counting. Now I am cursing, when I am forced to use Windows and can't find my butt using it.
So the mindset, if you want, was that of methodical planning and going slow, step by step. This is likely different if you're a gamer, or you need some very specialised apps, but for me, this was not the case. The games that I play, like Sudoku and Solitaire, work on any platform.
i guess convenience seekers can have linux these days. ppl don't care for the os, only for "the programs" they "need". i was agnostic to e.g. office suites (i hate em from the bottom of my heart) long before i considered trying a switch. that helped, i guess. a feature, that can only be reprocessed with a certain version of licensed software is fundamentally bullshit.
i wish people had told me abt using wine properly (or running a vm?) instead of dual boot. for windows will fuck up your boot sector and that's very scary the first time, alone.
the only problem i see, is the upcoming dependency on copilot ... just leave those ppl be.
instead teach the willing some fundamentals:
- piping ps through grep and use kill is not intuitive for the windows user.
- the packaging system the distro comes with (idc, just call it 'the appstore').
- show them software, there are ppl who arent aware, how e-mail works, and that you can have "your outlook in thunderbird or whatever"
- show them how to find solutions, and teach them how to read the shell commands they'll find. (+ the jokes abt rm .. they dont need to understand it all, but be sceptical before running any 3 lines found on the net.)
- ...
- really, its usually abt games. they come from steam. they got proton. teach ppl how to use steam! (and only after that tell them not to buy software that doesn't run on linux natively!)
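The ps-through-grep workflow from the first bullet, as a runnable sketch (a background `sleep` stands in for the frozen app; the name is arbitrary):

```shell
sleep 300 &                  # stand-in for a frozen app
pid=$!
ps aux | grep '[s]leep 300'  # the classic ps | grep lookup ([s] keeps grep from matching itself)
kill "$pid"                  # SIGTERM: "please close"
wait "$pid" 2>/dev/null
status=$?                    # 128 + 15 (SIGTERM) = 143 for a signal-killed process
echo "wait status: $status"
```

`pgrep`/`pkill` do the lookup-and-signal in one step, but the pipeline above is the version worth teaching first, because it shows what the shortcut is actually doing.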
I'm grateful for being able to have a choice like Linux or even the BSD family instead of having only two proprietary choices: Windows or macOS.
$ rg -li bsd /usr 2>/dev/null | wc -l
1035
en.wikipedia.org/wiki/Darwin_(…
OS family: Unix-like, FreeBSD, BSD
But sure, it's not a true ~~Scotsman~~ bsd.
What was your first Linux distribution?
I'm new to #Lemmy and making myself feel at home by posting a bit!
My first Linux distribution was elementary OS in early March 2020. Since then, I've tried Manjaro, Arch Linux, Fedora, went back to Manjaro, and since early January 2023, I've landed on Debian as my home in the #Linux world.
What was your first Linux distro?
The thoughtful, capable, and ethical replacement for Windows and macOS ā elementary OS
The thoughtful, capable, and ethical replacement for Windows and macOSelementary.io
Switched from Ubuntu to Debian this year. With one extra GNOME package install, it's basically the same without snaps, so perfect for me.
@trk@aussie.zone @ing since you mentioned Ubuntu. I also switched from Ubuntu Server to Debian for the servers, too.
I recently got a fancy wireless Steelseries headset, and since I'm probably going to switch to Linux in the future I'm a bit worried about the continued functionality of it in a non-Windows setting.
It also includes a mixer for managing multiple audio sources.
Isn't that just a normal part of the/any operating system?
I switched from macos to Linux because it can't stop babying users and being unnecessarily restrictive
I tried running a 2nd instance of Roblox simultaneously on macos 15 with another account but this shows up, if my mac can handle it then why can't it just let me do it? If I have two copies of an app like Roblox in separate User/Applications folders, macos moves them to the /Applications/ folder.
Sometimes it won't run apps claiming to be corrupted, so I then have to do sudo xattr -cr /Applications/someapp.app
in the terminal and they run perfectly fine. It always nags me if I download apps from anywhere but mac app store. Some of these messages can only be gotten rid of by disabling system integrity protection, but then macos blocks you from running MAS apps due to having "permissive security".
I don't daily drive macOS anymore, I switched to Linux on my M1 mac where I can do whatever the hell I want.
There are many different signals the OS sends to applications, which are kinda like "Can you kill yourself?", "Please kill yourself", or "I will kill you", to close it. In computer terminology, there are "close", "terminate", and "kill" types of signals. These are used so that applications have time to perform closing tasks (like saving) when necessary, and if they misbehave, you just "kill" them.
Now both Windows and Linux have these types of signals. In fact, every OS has them.
I believe this is the reason this meme exists:
When the user tries to shut an app in Windows (through the close button or task manager), Windows will wait and not give any option to immediately kill the app. Hence some apps don't close even after using End Task. Only if the app freezes for some time will it give the option to force quit, ~~no other way~~ (edit: it exists). In Linux, it's the same: Linux waits for the app to close. But the difference is that the option to kill is available at any time in Linux, which basically gives the user full control. Although the kill option in Linux may be hidden, as a way to keep users from using it unless necessary, since applications may not like it.
The shutdown process of both OSes is the same: they wait for all apps to close by sending a "please close" signal, and if they misbehave, a "shutdown anyway" option is shown to the user, basically killing all apps.
The meme is not correct and is just a stereotype of different OSes. This stereotype comes from how people normally experience different OS cultures and practices. Both OSes have the same process for managing apps. Both OSes will wait for a process to close if it freezes and give the user the option to force quit.
SIGKILL in the meme is correct only for the right panel; the left panel is actually a SIGTERM (or something else which means "please close", I don't remember).
The only thing the meme should emphasize is how the user is given full control in Linux (even deleting the kernel), while Windows is careful not to let users do something stupid.
Edit: Killing apps in Windows can be done on demand through cmd using the taskkill command.
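The polite-vs-forced distinction above can be demonstrated in a few lines of shell: a process that traps SIGTERM gets to clean up, while SIGKILL cannot be caught at all.

```shell
# Child traps SIGTERM ("please close") and exits cleanly.
# (The `sleep & wait` pattern makes the trap fire immediately.)
bash -c 'trap "echo cleaning up; exit 0" TERM; sleep 30 & wait' &
pid=$!
sleep 0.3
kill -TERM "$pid"     # polite: the trap handler runs
wait "$pid"
term_status=$?        # 0: the handler chose its own exit code

# SIGKILL ("I will kill you") cannot be caught; the process just dies.
bash -c 'trap "echo never printed" TERM; sleep 30 & wait' &
pid=$!
sleep 0.3
kill -KILL "$pid"
wait "$pid" 2>/dev/null
kill_status=$?        # 128 + 9 (SIGKILL) = 137
echo "TERM: $term_status, KILL: $kill_status"
```

This is exactly the choice a task manager or `xkill` gives you: send the catchable signal first, escalate to the uncatchable one only when the app ignores it.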
OpenMandriva Lx 6.0 Rock The Spring Release ā OpenMandriva
Happy Easter holidays!
We made fruitful use of this time to provide you with a nice surprise. The independent, community-controlled distribution OpenMandriva Lx 6.0 fixed point release (as opposed to the rolling release branch) is out right now.
Nice to see that Mandriva is still alive and kicking. Used it when it was still "Mandrake". Started as a recompile of RedHat optimized for 586, iirc.
What's the USP for Mandriva these days?
Mandriva is gone, but there's a couple of projects carrying its legacy. OpenMandriva is one of them, obviously. Mandrake was my first distro too, so I have a soft spot for it.
From my perspective, OpenMandriva's biggest strengths are that it's independent, non-derivative, community driven, and based in Europe. Unfortunately it's also small, but the people behind it seemingly do a lot with very little, so the community is passionate about the project.
Personally I'm just happy that there are smaller, non-corporate distros still out there providing alternatives. And OMLx seems like a pretty solid distro at that.
For their selling pitch, you can check their FAQ.
I switched from Manjaro to CachyOS and OMG!
I first started using Manjaro after being on Debian/Ubuntu derivatives for years. Mint used to be my daily driver, then LMDE for a while. After struggling with EndeavourOS through 2 or 3 breaking updates requiring a reinstall, I made Manjaro with KDE Plasma my home for several years.
Manjaro was stable and, I thought, blazing fast compared to Mint. Everything just worked and was cutting edge. I thought my distro hopping days were over and I had found the one that works for me.
Recently I've been reading about CachyOS and decided to give it a whirl on my test Dell Latitude. Turns out I had no idea how fast and lean Linux could be on that off-lease business laptop! I now have it installed on my main laptop and it's leaps and bounds faster than Manjaro, has none of the bloat, and just works! I know it's early, but I think I have found a new home! I have Timeshift set up just in case, so I'll see how stable it is over the next few months, but so far I am impressed.
Highly recommend everyone who's into Arch and rolling release to try it.
[meta] is your post discussion or user support?
i get a little annoyed at posts that start with broad statements like "is linux actually ready for the average user?" but then it's just someone asking for help to fix a problem they have with their sources.list or whatever. it's not a massive problem, but it's misleading and it feels borderline inflammatory sometimes
please tell us when you're asking for help
ty
how could this idea work with the fediverse
A revolution is supposed to change things. Looking at things today, the only revolutionary idea left is to make society reflect the best of us instead of the worst. Most people prefer kindness and love. But lacking these values allows others to thrive in our world. They spend their time deceiving and exploiting the rest of us, people trying to enjoy life and the things that bring joy and love. We can't do that by spending all our time dealing with the sad creepy weirdos ruining everyone's lives. So they've been able to shape the world. The only revolution left is to build something to undo what they've done. We need a force for love in the world.
This begins with anyone who thinks it's silly to expect love to play a major role in society and our future. They have to question who taught them how the world works. They have to wonder why they think that way, because the power of love is not a revolutionary concept. Something else convinced them.
Education is designed by politicians also responsible for war; news comes from corporations whose purpose is exploitation of anyone and thing possible. No wonder people think a more loving world seems like fantasy. Everything seems designed to make us think so.
The internet makes it undeniable that knowledge and tech are fueling hate, greed, ignorance in every heart, every family, community, country. What's not so obvious is how to teach people what's wrong: that knowledge should not be controlled by politicians and the rich.
If we want a revolution to actually change things, it means we need to liberate knowledge from politicians and the rich. A goal like that depends on people understanding why people don't understand it. So instead, we could hope the state of the world's enough to convince people what's wrong.
To literally free knowledge, we have to free the people responsible for it, every individual and group, all the research universities, all focused globally on the same goal: to save the future. What's more loving than that?
The key to a revolution based on love in the world is to build something free of the people who disagree, the hateful, greedy, ignorant, whatever. That's possible with the internet, where we can work together to organize ourselves, our knowledge, our resources.
The first, most important step in the only revolution we have left is to create our own democratic corporation. The only way we can confront the multinationals exploiting us is with our own. The concept of Democorporation coordinates all people and groups worldwide, with anyone free to share their knowledge of how to build this future. It begins with whichever individuals, corporations, and institutions of knowledge don't require liberating in order to help. They can help free the others; they can set the foundation so Democorporation can challenge the multinational corporations pillaging the planet and threatening the future.
Democorporation can only begin as a social network, because that's how the people can best support it. Participation, data, and advertising can help with funding. But more important is being democratic. People need this network to vote and express how to build their future. With online users, volunteers, donors, employees, and investors all expressing their perspectives, they offer the most balanced democracy and leadership possible.
When we have a social network that we own, uniting the world in our own democracy, we achieve the goal of any great revolution: we establish our own republic. Interepublic has the benefit of a corporation being able to limit, exclude and fire people who don't want it to succeed. That overcomes the problem of real world countries: we're all stuck with people who want our governments to fail, who want others to suffer. As antidotes to the hate in society, Interepublic and Democorporation become outlets of love for the world. That's what we're missing, and it's all we need to change history.
This revolution is global resistance against everything dividing us and everyone exploiting us. It is the "rebellion of people coming together". Interepublic and Democorporation use the internet to create leverage that's never been available before. But only if people agree love is necessary to fix what's wrong. Nothing else will bring us together.
That sounds good until you remember there is no planning how to capture love. You just express it and hope your love is reciprocated...but this isn't a teenager working up the courage to call his first love. This is the world, and all these ideas and plans that express my love only work if people understand the love I've already poured into it.
To make this revolution truly new, truly revolutionary, it begins with something as intimate and personal as love, one stranger to another. So when I profess my love for the world, I risk the worst kind of heartbreak imaginable. Maybe that risk is proof enough? To trust who I am, my motivation, I have to be honest even if it's humiliating for me. That's how to explain what it took for me to do this. No one happy with the world would.
Because here's the thing...I do not want to do this. I think it's inhuman and inhumane to be put in a position like this. It is a constant fight against myself, doing what's right while ruining my life. I'm losing because the world keeps getting worse, so I feel sick with guilt and torture more ideas out of myself.
And I can't describe this inner conflict without describing the kind of sad life that makes someone do this. If I loved myself enough, I would never be in this position. It is a living nightmare. Doing this means I love the world more than myself... too bad it feels like such an abusive relationship.
Think about it: someone's not gonna spend their life on this, decades of trial and error, if they've experienced the best of the world, love, family. My life began in stress. Now that I accomplished my goal, I'm left psychologically devastated by it. I'm in a place of responsibility no one should be. And worst of all it seems to piss people off that I even tried?
Your reaction decides if this love is reciprocated. If it is we create a love story like no other. And if not, I can hope failure and tragedy might do a better job of finding help than if I'm alive. A win-win for the world, just not for me...what's more loving than that?
For me personally, my love for the world would be reciprocated by freeing me from this stress and responsibility. Maybe the most revolutionary thing here is that I want nothing to do with politics or business. This is a first step in an ongoing process to remove myself from this insanely stressful situation, and it's quite elaborate.
The most genius thing I did was to create a story/fantasy/metaphor/game that lets me help without being directly involved. Two birds, one stone. If I can make it entertaining, I can earn money and raise attention. Four birds, one stone.
To put this in context, I've set up a political-economic-societal plan, but I also imagined a metaphorical story to promote knowledge the way religions promote belief. It's a modern mythology, and it's for people who can't understand the liberation of knowledge. The goals for the real world and the fantasy story are the same: a search for a more loving world. And they begin the same: with your choice of what happens to me.
Our world is built on the same choice we make whether to help others or not. If people form this bond with me and help me survive what's coming, those bonds, the knot of love and connection form the foundation for this revolution and the loving world it would create.
Love would be the seed for everything that grows from here, so Interepublic and Democorporation are literally born and grow shaped by love. And the world gains what it's missing most, a force to fight for the best of us against the worst.
Faster alternative to Evince for PDFs
Btw, I use Arch (via EndeavourOS*)
Hopefully this kind of post isn't too tired, but I figure it's my turn:
Finally decided to, after absolutely refusing to upgrade to 11, make the jump from Win10 to Linux! Been hopping around distros a bit and landed on EndeavourOS last night and I'm really enjoying it so far.
It's definitely tinkery and took me like 2 hours just to get my push-to-talk working in Discord (mostly due to my own lack of knowledge), but I love the level of control over everything you have (was on Pop!_OS before, edit: no hate, just wasn't for me!)
There's definitely never been a better time to switch, and I'm very excited for when I inevitably brick my shit and come back here for help, so thanks in advance everyone!
Yea im about to switch myself. Been looking at suggestions and stuff, probably gonna start with Mint myself.
Many different sources advise putting it on a flashdrive first and loading from there, to start. Make sure I like it.
But the end goal, eventually, would be to remove windows from the comp entirely, right? Eventually installing my chosen distro as the OS on the computer itself? Does that sound about right?
For me, I've been throwing distros on a spare SSD so I could test run in a proper install, but I'm sure a thumbdrive would be fine. Just keep in mind that you might get some hangs and things will be slower due to the speed of the drive, rather than the inefficiencies of the OS you end up on. If you want to test out specific programs or games or something, you can always do what I did and put them on a separate faster storage drive (I'm on SATA SSD for my OS right now, but am putting other things on NVME).
As I mentioned elsewhere, I still have my Windows on another drive so I can boot to it if I need to, but I honestly haven't needed to even once since switching, so I'll probably end up just switching to VM only for anything that requires Windows fairly soon here.
The transition has been much simpler and smoother than I ever had imagined.
NVK enabled for Maxwell, Pascal, and Volta GPUs
NVK is now a conformant Vulkan 1.4 implementation on NVIDIA Maxwell, Pascal, and Volta GPUs!Collabora | Open Source Consulting
Thanks for such a detailed explanation. That's what I meant: I would love to avoid the official-driver headache that makes you avoid recommending Nvidia. Still, there are some things where you cannot avoid it. Things I have in mind that are better on Nvidia than on AMD / Intel GPUs with Mesa:
- Blender
- ML / AI / CUDA and so on
- DaVinci Resolve (and other creative stuff like Blender above)
- RayTracing
- DLSS (FSR is catching up but this is #1)
I would love the Nvidia support just to be stable.
For encoding and decoding I would choose Intel.
For gaming, AMD, as I'm running right now with Bazzite.
At least the situation will get better.
Nouveau's kernel driver is a horrible mess, so I'm looking forward to Nova, if it ever gets ready.
For older (pre-about-RTX 2000-series) cards, the kernel driver had to do a lot, and Nouveau had to reverse engineer most things. Now, Nvidia has moved most of the proprietary magic into something called the GSP (GPU System Processor), which is a small processor (RISC-V, IIRC) which does many things the kernel driver did previously, like reclocking. This, in addition to the official open kernel drivers should make developing a new FOSS Nvidia driver a lot easier. RedHat's Nova (and I think Nvidia's open driver) only support cards with a GSP for this reason.
NVK is very impressive for such a new unofficial Nvidia driver, in my opinion. If I remember correctly, they said that they'll focus more on optimization now that it's conformant.
When/if Nova is ready, it will finally be possible to use a Rust graphics driver stack on Linux outside of Asahi.
If you have any questions remaining, just ask.
Edit:
So the closed source GSP firmware blob has 3 "good" points:
1. The closed source parts are limited to inside the GPU.
2. It moves a lot of work away from the kernel driver.
3. It allows open source drivers to support HDMI 2.1 & later.
The HDMI Forum decided some time ago that HDMI was too open. Now, for the newer versions, the license doesn't allow open source implementations. Nvidia gets around this with proprietary GSP firmware inside the GPU (even with official open source drivers, not sure about Nouveau) and Intel with GPU firmware or an internal adapter, depending on the GPU (if I've understood correctly). Only AMD doesn't support the newest HDMI version.
4k@120hz unavailable via HDMI 2.1 (#1417) Ā· Issues Ā· drm / amd Ā· GitLab
Brief summary of the problem: I have a RX 6800 XT connected to a LG B9 TV via HDMI. As both… (GitLab)
Decentralization Scoring System (v1.3)
This scoring system evaluates how decentralized and self-hostable a platform is, based on four core metrics.
š Scoring Metrics (Total: 100 Points)
Metric | Weight | Description |
---|---|---|
Top Provider User Share | 30 | Measures how many users are on the largest instance. Full points if <20%; 0 if >80%. |
Top Provider Content Share | 30 | Measures how much content is hosted by the largest instance. Full points if <20%; 0 if >80%. |
Ease of Self-Hosting: Server | 20 | Technical ease of running your own backend. Full points for simple setup with good docs. |
Ease of Self-Hosting: User Interface | 20 | Availability and usability of clients. Full points for accessible, FOSS, multi-platform clients. |
š Example Breakdown (Estimates)
Platform | Score | Visualization |
---|---|---|
š§ Email | 95 | š©š©š©š©š©š©š©š©š©š©š©š©š©š©š©š©š©š©š© |
š¹ Lemmy | 79 | š©š©š©š©š©š©š©š©š©š©š©š©š©š©š©š© |
š Mastodon | 74 | š©š©š©š©š©š©š©š©š©š©š©š©š©š©š© |
š£ PeerTube | 94 | š©š©š©š©š©š©š©š©š©š©š©š©š©š©š©š©š©š©š© |
š¼ Pixelfed | 42 | š§š§š§š§š§š§š§š§ |
šµ Bluesky | 14 | š„š„š„ |
š„ Reddit | 3 | š„ |
š§ Email
- Top Provider User Share: Google ≈ 17% → Score: 30/30
- Top Provider Content Share: Google handles ≈ 17% of mail → Score: 30/30
- Self-Hosting: Server: Easy (Can leverage hundreds of email hosting options) → Score: 16/20
- Self-Hosting: Client: Easy (Thunderbird, K-9, etc.) → Score: 19/20
Total: 95/100
š¹ Lemmy
- Top Provider User Share: lemmy.world ≈ 37% → Score: 21.5/30
- Top Provider Content Share: lemmy.world hosts ≈ 37% of content → Score: 21.5/30
- Self-Hosting: Server: Easy (Docker, low resource) → Score: 18/20
- Self-Hosting: Client: Good FOSS apps, web UI → Score: 18/20
Total: 79/100
š Mastodon
- Top Provider User Share: mastodon.social ≈ 40% → Score: 20/30
- Top Provider Content Share: mastodon.social ≈ 45–50% of content → Score: 20/30
- Self-Hosting: Server: Docker setup, moderate difficulty → Score: 15/20
- Self-Hosting: Client: Strong ecosystem (Tusky, web, etc.) → Score: 19/20
Total: 74/100
š£ PeerTube
- Top Provider User Share: wirtube.de ≈ 14% → Score: 30/30
- Top Provider Content Share: Approximately 14% → Score: 30/30
- Self-Hosting: Server: Docker, active community, moderate resources → Score: 16/20
- Self-Hosting: Client: Web-first UI, FOSS, some mobile options → Score: 18/20
Total: 94/100
š¼ Pixelfed
- Top Provider User Share: pixelfed.social ≈ 71% → Score: 4.5/30
- Top Provider Content Share: Approximately 71% → Score: 4.5/30
- Self-Hosting: Server: Laravel-based, Docker available, some config needed → Score: 15/20
- Self-Hosting: Client: Web UI, FOSS, mobile apps in progress → Score: 18/20
Total: 42/100
šµ Bluesky
- Top Provider User Share: bsky.social ≈ 99% → Score: 0/30
- Top Provider Content Share: Nearly all content on bsky.social → Score: 0/30
- Self-Hosting: Server: PDS hosting possible but very niche and poorly documented → Score: 4/20
- Self-Hosting: Client: Mostly official client; some 3rd party → Score: 10/20
Total: 14/100
š Reddit
- Top Provider User Share: Reddit hosts 100% of user accounts → Score: 0/30
- Top Provider Content Share: Reddit hosts all user-generated content → Score: 0/30
- Self-Hosting: Server: Not self-hostable (proprietary platform) → Score: 0/20
- Self-Hosting: Client: Some unofficial clients available → Score: 3/20
Total: 3/100
How Scores are Calculated
š§āš¤āš§ How User/Content Share Scores Work
This measures how many users are on the largest provider (or instance).
- No provider > 20%: If no provider has more than 20%, it gets full 30 points.
- Between 20% and 80%: Anything in between is scored on a linear scale.
- > 80%: If a provider has more than 80%, it gets 0 points.
š Formula:
Score = 30 × (1 - (TopProviderShare - 20) / 60)
…but only if TopProviderShare is between 20% and 80%.
If below 20%, full 30. If above 80%, zero.
š Example:
If one provider has 40% of all users:
→ Score = 30 × (1 - (40 - 20) / 60) = 30 × (2/3) = 20 points
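The linear scale above is easy to express as a small function. This is a sketch of the scoring rule as described; the function name is mine, not part of the original scoring system:

```python
def share_score(top_provider_share: float, weight: float = 30.0) -> float:
    """Score the top provider's share on a linear scale.

    Full points below 20% share, zero above 80%, linear in between,
    matching the formula above.
    """
    if top_provider_share <= 20:
        return weight
    if top_provider_share >= 80:
        return 0.0
    return weight * (1 - (top_provider_share - 20) / 60)
```

For example, share_score(37) reproduces the 21.5 points listed for lemmy.world, and share_score(71) reproduces Pixelfed's 4.5.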
š„ļø How Ease of Self-Hosting Scores Work
These scores measure how easy it is for individuals or communities to run their own servers or use clients.
This looks at how technically easy it is to run your own backend (e.g., email server, Mastodon server) or User Interface (e.g., web-interface or mobile-app)
- Very Easy: One-command or setup wizard, great documentation ā 18ā20 points
- Moderate: Docker or manual setup, some config, active community support ā 13ā17 points
- Hard: Complex setup, needs regular updates or custom config, poor documentation ā 6ā12 points
- Very Hard or Proprietary: Little to no self-hosting support, undocumented ā 0ā5 points
š Sources
- š§ Email: W3Techs – Email Server Overview
- š¹ Lemmy: FediDB – Lemmy Software Stats
- š Mastodon: FediDB – Mastodon Software Stats
- š£ PeerTube: FediDB – PeerTube Software Stats
- š¼ Pixelfed: FediDB – Pixelfed Software Stats
- šµ Bluesky: SoftwareMill – Bluesky's Decentralized Architecture
- š„ Reddit: Wikipedia – Reddit API Controversy
Footnotes
This is a work in progress and may contain mistakes. If you have ideas or suggestions for improvement, feel free to let me know.
Source: github.com/NoBadDays/decentral…
Bluesky's Decentralized Architecture Compared to Mastodon and Twitter/X
Check the factors that might make Bluesky significantly different from its competitors. (Adam Warski, SoftwareMill)
The scoring system is basically there to put a number on "How free are users and hosts of a platform to move around?"
Or "How much power is in the hands of the people and not a few companies?"
For me Email scores very high in this regard.
As far as I know most Lemmy instances leverages paid-for or freemium services to have their instances work easily/properly
Then please update your category name to reflect that. Right now it says "Self-Hosting" which to the majority of readers means hosting it yourself, whatever the reason may be: privacy, configurability or just being safe from future enshittification.
As far as I know most Lemmy instances leverages paid-for or freemium services to have their instances work easily/properly
Yes but you can't compare a whole lemmy instance to an account on an email server that you share with others. The fair comparison would be hosting a lemmy instance to hosting your own email server and creating an account on Proton Mail to creating an account (or a community) on lemmy.world.
How to get people to use Mastodon?
cross-posted from: lemm.ee/post/56496251
I'd like to add to suggest a couple of things regarding Mastodon and user onboarding/retention.
The Server Selection Problem^TM^
The single biggest problem with Mastodon adoption is that people see talk about a server and give up. As such, servers need to be removed from the conversation and onboarding process. A server still needs to be selected for a new user, however, which raises the question: how should we select a server for a new user?

The obvious solution is to simply direct users to mastodon.social, which is actually what Mastodon already does to a certain extent. The issue with this is that the Fediverse is meant to be decentralized. As such, it's counterproductive to funnel people towards a single server. This causes maintenance bottlenecks and privacy/data-protection concerns.
As such, there needs to be some sort of method that ranks servers based on a few factors in order to select the optimal server for any given user, while keeping the decentralized nature of the Fediverse in mind.
Why any server?
First, it's important to answer the question of why any given user would pick any given server.

Generally speaking, the server isn't a big deal; any server allows users to interact with the whole of the network in its full capacity.
All servers are Mastodon, after all.
However, there are differences. The most significant ones are, I'd say: location, uptime, and language.
A user benefits from being registered to a server that's geographically close to them, as that leads to a better connection. Additionally, servers with high uptime and stability are preferred, as users may have different times they use the server and nobody likes to try and access a server and see that it's down for any number of reasons. Finally, users need to be able to understand the language the server is in (obviously).
I believe these three factors should be at the forefront of the decision-making process for deciding what server to be suggested to any given user on sign-up.
Auto-selector
With that, comes the solution: a server auto-selector. A game I play, DCSS, actually does something similar for online play.
(I have my location turned off and there are very few servers, as you can see, so listing them is trivial.)

This isn't exactly a novel scientific breakthrough, but I think it's a significant notion for helping the onboarding process for new Mastodon users.
A server auto-selector should filter servers to suggest by following these steps:
- Detect the user's system language.
- Detect the user's location.
- Calculate the server's uptime score.
- Pseudo-rank user-count.

I believe the first two points are self-explanatory. Given that Mastodon (and the Fediverse in general) stands firmly against data-harvesting, location data should probably not be mandatorily collected. It should be easy to either ask the user for some vague information or simply allow them to skip this step entirely, even if it might affect the user experience. Additionally, there's the issue that many servers don't make it known where they're hosted. Ideally, this could change to facilitate server selection for the users, but there's always the point that, if a server doesn't say where it's hosted, it gets pulled down by the algorithm, which in turn encourages divulging that kind of information; this might be a problem solved by the solution, if you get my meaning.
What I mean by uptime score is simply an evaluation of the server's uptime history. For example, it's not good policy to direct users towards servers that are often unavailable, it might be disadvantageous to direct users to servers with too-frequent downtime for maintenance, and so on. As such, the server auto-selector should calculate a sort of "score" for any server that fits the first two points. I can't say how this should be calculated, exactly, but I'm sure some computer-knowers out there can come up with a less-than-terrible methodology for this.
The last point is something that I think should be taken into account as well, regarding the user-count of the servers. As I mentioned, we can't funnel users towards a single server, but another issue is that we should actually encourage user dispersion over many servers. The outlined method might already do this to a sufficient extent, but I suggest doing some sort of randomization of filtered servers based on user-count. I think it's wrong to simply plug a new user into the least-populated server around, but I do think that over-populated servers, in a relative sense, should be discouraged by the server-selector.
Worst case scenario, a random server that passes the uptime score point can be selected for any new user.
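A minimal sketch of such an auto-selector, assuming each server is described by an uptime fraction and a user count (the data shape, threshold, and function name are my assumptions, not an existing API):

```python
import random

def pick_server(servers, min_uptime=0.99):
    """Filter servers by uptime history, then pick one at random,
    weighting against crowded instances without ever excluding them."""
    candidates = [s for s in servers if s["uptime"] >= min_uptime]
    if not candidates:
        return None
    # Inverse user-count weighting: large instances stay eligible but less likely.
    weights = [1.0 / (1 + s["users"]) for s in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```

Language and location filtering would happen before this step, narrowing `servers` down to compatible candidates.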
The onboarding experience
Basically, this should be as simple as possible. The more questions need to be answered, the worse.

I think a simple "Join Mastodon" button is the best. Just a big blue button in the middle of the homepage.
Server selection should start as soon as the new user accesses the joinmastodon website, and clicking the button simply redirects the user to the sign-up process for that server.
I believe this approach would increase adoption of Mastodon by streamlining the server selection process, as well as help the continuous decentralization of the Fediverse.
The Feed Problem
Another significant issue with Mastodon is the feed and community/discovery aspects.

Creating a new Mastodon account yields... nothing. An empty feed!
This is absolutely terrible and ruins user retention. I've had several people tell me that this first-experience emptiness completely turned them off from Mastodon. It's not intuitive, and it needs to be corrected.
A simple solution
Mastodon does have feeds, but they're all tucked away in the Explore and Live Feeds tabs.

I think the single biggest change that Mastodon can make, as far as this goes, is to shift the Explore->Posts feed to the Home tab. Just do it like Twitter or Bluesky: make the discovery feed the first thing a new user encounters.
That, by itself, should make a difference in terms of user retention.
Maybe I'm delusional and severely underestimating how doable this is, but I really believe Mastodon needs to change the way it deals with new users if we want it to actually grow into a strong social media, keyword social (it needs people).
Thoughts?
to be on Facebook but the local focus in a pretty leftie inner city area is a good idea. If people know people irl on Fedi they will maybe have an easier time.
Is KDE actually good or is it overrated? Or was I just unlucky because of prebuilt distros?
Hello folks. I use many distros, from Debian to Fedora to openSUSE and Arch. I also use many window managers, like i3, dwm, and qtile. For desktop environments, I use XFCE the most. Currently, I am looking to try something new, hence KDE.
I am looking for something with a beautiful UI and works out of the box. So, something on the same spectrum as XFCE but more pretty.
So, I tried out the distros with preinstalled KDE: Fedora KDE, Manjaro KDE, Kubuntu.
The good: KDE is beautiful and very easy to use. I actually enjoy using my computer more.
The bad: it crashes... a lot, even when I turn off all the animations. My system is not that slow: AMD 7 Pro with 64 GB of RAM. Some examples:
- Logging in, KDE hangs for 30 seconds. Even when I finally see the desktop, I need to wait a further 10 seconds to finally be able to interact, i.e. click and open stuff.
- After resuming from suspend, the system hangs and there is nothing I can do except force a reboot.
- Browsing the web with only 3 tabs open, KDE also hangs.
As much as I hate GNOME, everything just works. I installed the GNOME flavors of the above distros and never experienced any hiccups.
If KDE works for you, do you use a preinstalled distro and which one? How about if you install KDE from scratch, like Arch?
KDE just works on my machine, which has lower specs than yours. I've never had it crash. I use EndeavourOS, so it came with it by default (which was part of the reason I chose it).
Edit: I don't do much tweaking of the KDE settings other than the main color scheme. I also have never had an issue with waking from sleep on EndeavourOS (though I recall in years past that was an issue with most distros I tried, and unrelated to KDE, since I was less a fan of its style back then and didn't use it). My setup is a normal desktop PC that I use daily for everything, including gaming.
It's an issue according to any UX pattern. If something says that it's done when it's not, it's misrepresenting the state of the action.
Hard to believe that modifying the counter to include the necessary time for actual writing to the flash drive would break everything. Target flash drives only etc.
System functioning as intended doesn't mean that it's a good UX.
Share your partition scheme!
How did you partition your disk before installing Linux?
Do you regret how you set it up?
I'm looking for some real users experiences about this and I'm trying to find the best approach for my setup.
Thank you for sharing!
fabien@debian2080ti:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.9M 3.2G 1% /run
/dev/mapper/debian2080ti--vg-root 28G 25G 1.8G 94% /
tmpfs 16G 168K 16G 1% /dev/shm
tmpfs 5.0M 24K 5.0M 1% /run/lock
/dev/nvme0n1p2 456M 222M 210M 52% /boot
/dev/nvme0n1p1 511M 5.9M 506M 2% /boot/efi
/dev/mapper/debian2080ti--vg-home 439G 390G 27G 94% /home
tmpfs 3.2G 2.6M 3.2G 1% /run/user/1000
/dev/sda3 1.7T 1.6T 62G 97% /media/fabien/a77cf81e-fb2c-44a7-99a3-6ca9f15815091
Android 16 lets the Linux Terminal use your phone's entire storage
Android 16 Beta 4 uncaps the disk resizing slider, allowing you to allocate your phone's entire storage to the Linux Terminal. (Mishaal Rahman, Android Authority)
It's supposed to be available on Android 15, but only on 'select devices', so probably only on Pixel.
Thanks for trying it.
Kodi on Wayland losing focus
There's a strange issue with Kodi and the focus under Wayland: after some time Kodi (running in full screen) loses focus so I cannot navigate it with the TV remote anymore, but I have to switch back to Kodi using ALT+TAB on the keyboard.
I'm not sure of when this happens, basically I power on the TV on the next day and Kodi has lost focus, the PC is always on.
Since something like this had never happened before on the old PC (running Mint), I tried switching to xorg instead of Wayland and the problem disappeared.
Desktop environment is KDE; Kodi is installed via the official Flatpak. No standard keyboard or mouse is connected to the PC, only a wireless keyboard with touchpad which is practically always powered off, so it is impossible that someone is switching focus by mistake.
Does anyone have any clue?
Does anyone have any clue?
My best guess would be some relatively passive notification stealing focus.
As reddeadhead mentioned, there's specific "don't steal focus" settings. I've had good luck with them.
Any fediverse like discord clones?
I'm wondering if anyone has made a fediverse-like system (i.e. multiple instances talking to each other) for Discord?
I know Matrix exists, but it's only rooms instead of servers with channels, etc...
it's only rooms instead of servers with channels..
Literally the same thing but with different names. I use Matrix with Element, and it is exactly the same as Discord. Laid out the same, functionally the same (actually better since it encrypts everything), and even the UI is identical.
From a chat standpoint, the two are near identical - yes - but Matrix lacks the "voice/video calls as persistent rooms" feature that Discord has. This was planned a while back, but has recently been pushed on the backburner^[1]^ as they work on Element Call.
Early on Matrix was sort of being built up as an IRC/Discord alternative, but recently they've pivoted more towards a WA/Telegram/Slack alternative as most of their financial support comes from European governments and companies looking for strong and secure internal communication solutions they can manage themselves.
So, TL;DR you probably won't see the exact Discord like features you want land in the spec any time soon as they're not being funded.
So that means, right now:
- No persistent voice/video rooms (but they are on the roadmap!)
- No push-to-talk or "game friendly" settings like voice auto-detection (also not really on the roadmap)
Having said all that, Matrix is brilliant and I highly encourage people to check it out. I use a Matrix <-> Signal bridge for most of my comms with my friends, and we voice chat on Mumble. Not ideal, but you get to avoid Discord and you get a very similar experience! Bonus points for Mumble as it's super lightweight.
~[1] It's not really on the backburner so much as it's something that will have to be worked on after the new VOIP stack - Element Call - is integrated in the wider Matrix ecosystem.~
First draft woes
cross-posted from: lemmy.world/post/28546756
So I've completed the cosine similarity function, which means the script is now recommending videos in a raw way. Below is just a ranking of videos that match my watch history (all three are most likely videos I've already watched):

2: {shortUUID: "saKY2TWfwNYgPUQFkE4xsi", similarity: 0.4955}
3: {shortUUID: "kk7x8GAs7gNvkzaPs6EPiU", similarity: 0.4099}
4: {shortUUID: "uXeAyVfX1WEzqSPsDxtH3p", similarity: 0.2829}

Getting to this point made me realize: there's no such thing as a simple algorithm, just simple ways to collect data. The code currently has issues with collecting data properly, so that's something that needs fixing. Hopefully, once the data collection in this script is improved, it can be reused for future Fediverse algorithms.
There are countless ways to process the data. Cosine similarity is a simple concept and easy to implement in code, but it has a flaw: content you've already watched tends to rank higher than anything new. So a basic "pick the highest cosine similarity" approach probably isn't ideal. It either needs logic to remove already-watched videos, or to bias toward videos lower down in the ranking. But filtering out watched videos isn't perfect either, since people do like to rewatch things.
The algorithm currently just looks at how much time you spent watching unique segments of a video, then assigns a value in seconds to all the words in the title, description, and tags, and sums that over all videos.
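That word-weighting scheme can be sketched as follows. This is my reading of the description above, not the extension's actual code, and the names are hypothetical:

```python
import math
from collections import Counter

def profile_from_history(history):
    """history: iterable of (seconds_watched, words) pairs, where words come
    from a video's title, description, and tags. Each word accumulates the
    seconds spent watching unique segments of that video."""
    profile = Counter()
    for seconds, words in history:
        for w in set(words):
            profile[w] += seconds
    return profile

def cosine_similarity(a, b):
    """Cosine of the angle between two sparse word-weight vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```

A candidate video's own title/description/tag words form a second vector (e.g. weight 1 per word), and candidates are ranked by cosine_similarity against the watch-history profile.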
The algorithm is actually okay; subjectively, it's better than just sorting by date. I picked a few videos at random from the top 300 ranked by cosine similarity, and there was content interesting enough to watch for more than 30 seconds, and some that was just too weird for me. Here are a few examples:
- dalek.zone/w/7zifNjSwiafLnsoVV…
- peertube.1312.media/w/92B9U3DQ… (Japanese Spider-Man video; I'm not into it)
- peertube.wtf/w/gmmi53JorcBQLEZ…
- peertube.wtf/w/5eeLAVm5ZQBMK1P…
Some of these links are across different instances because no single PeerTube instance has all the videos. I loaded metadata for over 6,000 videos across five instances during testing.
The question is: should the algorithm be scoped to a single instance (only looking at content on the userās home instance), or should it recommend from any instance and take you there?
Funny thing to note: there might be a Linux pipeline in this algo.
GitHub - solidheron/peertube_recomendation_algorythm: currently just a browser extension that monitors your the peertube videos your watch and stores them locally
I think it needs to work across instances, since we're concerned with the Fediverse, and federation is one of the defining mechanics. Also, when I have a look at my subscriptions, they come from a variety of instances. So I don't think a single-instance feature would be of any use for me.
Sure. And with the cosine similarity, you'd obviously need to suppress already watched videos. Obviously I watched them and the algorithm knows, but I'd like it to recommend new videos to me.
Is this video a legitimate way to get Linux on LineageOS via Termux or is there a better recent method?
Please and thank you
How to install Termux X11 and set up a Linux Environment on Android (Without Root)
Disclaimer: the tutorial and demo provided in this channel is for informational and … (YouTube)
news.itsfoss.com/google-androi…
Within developer options natively now
Good News! Google Starts Rolling Out Native Linux Terminal to Android Devices
It looks like Google is rolling out its native Linux terminal app for select Android devices. (Sourav Rudra, It's FOSS News)
Updated my post, I put the wrong thing. Does it apply to LineageOS as well?
But thanks for that too. That will help with other devices that run regular Android
Thank you!!!
LineageOS 22.2 (on FP4) does not seem to have that option yet.
At least, it is not listed in the developer options.
You can find it if you tap on the search button within developer options (or just general settings, as that also includes results from developer options) and type "terminal" or "linux".
The (Experimental) Run Linux terminal on Android
result shows up.
But after you tap on that, you see that the toggle is greyed out and can't be enabled.
I am interested in getting that to work, so any help is appreciated.
There is hopefully some ADB command or something that forcefully enables the Linux environment.
Depends what you mean by "Linux" here.
It's probably not the kernel itself, so do you mean
- a terminal, e.g. a working shell where you can run commands like `ls | wc -l`?
- headless containers, e.g. services like Immich accessed elsewhere?
- a window manager e.g. KDE or Gnome?
- software with a visual interface (a GUI), e.g. GCompris?
Based on that then one can answer if Termux is sufficient (or "legitimate") or if something else is needed.
PS: You can read some of my notes on Termux on different Android devices at fabien.benetou.fr/Tools/Androi…
Jackett memory leak
Linux Mint 22.1, and Jackett is hogging 200+ GB of virtual memory. I do have a couple of -arrs running, and a Calibre server, but it seems a ludicrous amount of memory. Reading on the web, it seems people think even 20 GB is crazy.
Any help/thoughts where to look? Not using Docker.
Virtual memory is different from swap memory.
Swap memory is used when you run out of physical memory, so the memory is extended to your storage.
Virtual memory is an abstraction that lies between programs using memory and the physical memory in the device. It can be something like compression and memory-mapped files, like mentioned.
And yes, some swap is still useful, up to something like 4G for larger systems.
And if you want to hibernate to disk, you may need as much swap as your physical memory. But maybe that's changed. I haven't done that in years.
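The virtual-vs-resident distinction is easy to see on Linux by comparing a process's VmSize and VmRSS in /proc. This is a quick illustrative sketch of that comparison, not a Jackett-specific diagnostic:

```python
def mem_kb(field: str, pid: str = "self") -> int:
    """Read a memory field (in kB) from /proc/<pid>/status on Linux."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

vm = mem_kb("VmSize")   # virtual address space, including mappings never touched
rss = mem_kb("VmRSS")   # pages actually resident in physical RAM
print(f"virtual: {vm} kB, resident: {rss} kB")
```

For something like Jackett, a huge VmSize alongside a modest VmRSS usually means reserved address space (mappings, runtime heaps), not RAM actually in use.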
Help
So, I'm IP banned from lemmy.world? Or is this Cloudflare or something locking me out? How do I proceed?
I have wanted to leave .world for a while, probably in favour of dbzero, but I would still at least like to delete my account and/or download some data beforehand?
I don't think I did anything wrong, and believe it is a Cloudflare thing, but how will I contact the mods if I can't open their front page to find their emails? Anyway, any help is appreciated.
Also, sorry if this is the wrong community, but it's the only one I know that maybe can help.
Edit 1: I can access the instance if I use a VPN, but I still don't know what to do. This kinda confirms it is Cloudflare, but how can I get off their "naughty list"?
Edit the last: it seems to have solved itself after some time. I just used this instance for a while, and now it's working again.
Bluetooth speaker has no sound | Ubuntu 24.04 and Fedora 41/42
UPDATE: After hours and dozens of fixes, it simply does not work. The BOSS Katana Mini X seems to be completely incompatible with Linux. I'm gonna install Windows again on my Surface. W11 works like dogshit on it, but at least I can use it to connect to my guitar amp.
Leaving the thread open in case a solution does eventually appear.
OP:
I'm having an issue with a BT speaker, well, a guitar amp actually (BOSS Katana Mini X).
Device is a Microsoft Surface Pro 7.
It connects, but it won't play any sound at all. I'm now at the point where I'm considering installing W11 on that Surface again just so I can connect it to my amp and play some guitar with backing tracks and whatnot. I hate using my phone for this.
- Speaker is chosen as the output device.
- Tried to switch to PipeWire
- Installed Blueman and a Pulse Audio interface
- Also tried this on Fedora 41(GNOME)
- Bluetooth earbuds from JBL works fine and get normal sound
- I have installed the kernel for Surface devices, but I also tested this BEFORE installing that and there has been no difference on both Ubuntu and Fedora.
What I notice is that there are only two configs I can choose from in the settings for the amp as an output device, instead of the long list I have for other devices. Possible cause?
pavucontrol
I have that one already, I'm gonna try the other one real quick!
Mastodon is my go-to "shout in the void about my goings-on" platform.
Pixelfed is where I post my original photography and artwork.
Bookwyrm is for my book nerdery, mostly.
Edit: Oh and I have a Matrix account but despite the fact that I mentioned it to literally all of my friends, nobody uses it. I keep it around in case someone actually wants to send me private messages because Mastodon is kinda badly suited for that.
Non-English Keyboard Input on KDE
Has anyone successfully typed either European accented characters or Japanese Kanas on their physical keyboard?
For the longest time, I've been trying to get non-English characters to appear on my system. Specifically European accented characters. I've read about the compose key, but I could never make it work somehow.
I've also tried to make the Kanas to appear using the Japanese keyboard, but that too doesn't work.
I'm using mostly KDE system, on many different distros. As for the keyboard, it's almost always standard US QWERTY without the numpad, varying between various laptops (mostly Thinkpads) and USB keyboards. For the Japanese, it's a Thinkpad W530 (should also apply to X230, T430, and T530).
I've been using Linux for quite a while now. I'm familiar with most inner workings of the system, but this is the one thing I can never wrap my head around!
Has anyone successfully typed either European accented characters or Japanese Kanas on their physical keyboard?
For the Latin extended characters, I've used AltGr, Compose, probably at some point the GTK control-shift-u thing. I've also used various emacs text input methods to do so. I don't speak Japanese.
I don't use KDE, but it looks like you can set it up to bind Compose at a per-user level once you've logged into your account.
userbase.kde.org/Tutorials/Comā¦
EDIT: "Motörhead" --- that was typed using the Compose key, which on this laptop I have replacing the Right Alt key. On this system, which is Debian, I do it systemwide by editing /etc/default/keyboard and adding:

XKBOPTIONS="compose:ralt,terminate:ctrl_alt_bksp,ctrl:swapcaps"
That swaps Left Control and Caps Lock, sets Right Alt to be Compose and...hmm, actually, I should check whether Control-Alt-Backspace still functions to kill Wayland, or if that stopped working when I moved off X11.
Then I ran # dpkg-reconfigure keyboard-configuration.
But if you're on a non-Debian-based distro, things may work differently.
BotKit 0.2.0 released
BotKit 0.2.0 Ā· fedify-dev botkit Ā· Discussion #4
We're pleased to announce the release of BotKit 0.2.0! For those new to our project, BotKit is a TypeScript framework for creating standalone ActivityPub bots that can interact with Mastodon, Missk… (GitHub)
Is there an easy way to filter all terminal commands that contain a --help flag?
| bat --language=help
by default for the syntax highlighting and colored output... Or if you know a lower effort way to color the output of --help let me know.
There's no particularly smart way to accomplish this in exactly the way you want. I don't like the solutions that search your $PATH, because now you're adding latency to search your entire $PATH for every command just to add this functionality. It's simply better to tell the CLI what you want than to have the CLI attempt (using logic) to figure it out.
The easiest solution here is to create your own command which calls the target application with --help:
#!/bin/bash
"$1" --help | bat --language=help
Then run it:
$ script_name docker
and it will run docker --help | bat --language=help.
If you use this solution a lot, you can try a bash function which you call at the end of commands if they error:
helpfunc() {
  "$1" --help | bat --language=help
}
trap 'helpfunc' ERR
But now you have to run logic to truncate previous commands to only return the first word of a command from history, and it becomes a real PITA...
cd into the directory you want to test commands in, then in bash:
for i in $(ls)
do
  $i --help >/dev/null 2>&1 && echo $i
done
Any command which honors --help will get listed. I suppose you could add a >> /tmp/output after the echo $i to log it.
Run it in /bin, /usr/bin, /usr/local/bin.
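The same probe works against a fixed list instead of a whole directory; this sketch (the command names are just examples) collects which ones exit successfully with --help. Swapping the list for bash's `compgen -c` would scan everything on your $PATH, which is exactly the latency trade-off mentioned above:

```shell
# Collect commands that accept --help without erroring.
# </dev/null guards against commands that would sit waiting on stdin.
supported=""
for cmd in ls grep definitely-not-a-command-123; do
  "$cmd" --help </dev/null >/dev/null 2>&1 && supported="$supported $cmd"
done
echo "supports --help:$supported"
```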
Is there a federated Strava alternative?
Strava is an absolute nightmare to use. My feed is absolutely chock full of ads and dog-walkers. Don't get me wrong, I'm very happy they're taking a 0.2 mile walk around their block and logging their progress, but I don't need to see it. Nike, TrainerRoad, Zwift, and Peloton all attach giant ads every time their users upload an activity. And I don't understand it, because it's not an ad-supported network. Like, I would happily pay to have all this shit hidden. It would be extremely simple for Strava to fix this: just provide me with a simple filter for what types of activities I'd like to see. The fact that they haven't done so, a long time ago, leads me to believe that they simply don't want to, for whatever reason. Plus they've already begun to enshittify by breaking integrations with third parties.
Are there any good options for this?
E: to be clear, I'm asking about the social aspect of Strava.
...yes, exactly. The brand of tool used is not relevant to the activity. Users are not adding those photos, they're added by Peloton. Not to mention the activity names.
You don't see Garmin or Wahoo adding in giant ads for their products when syncing rides.
CachyOS vs arch
Fewer packages really doesn't mean much in terms of how easy the system will be to manage. If anything, I'd say a distro with more, but preinstalled, packages is easier to manage, because the maintainers will make sure those packages are as easy to work with and upgrade as possible.
That said, I'm definitely not going to stop you from trying Arch though. You can even get similar (or better) optimizations by using the ALHP repos and a kernel like linux-tkg or linux-cachyos for example, although the difference really is negligible in most cases.
One of the reasons I'm leaning towards arch is because of its minimal approach, so that I can install only what I need.
Does anyone know of any MipsLE/Mips64LE systems in the wild?
And now that I have made the post I got some search results:
1. Routers apparently still use MIPS
2. linuxdoc.org/HOWTO/MIPS-HOWTO-… has some MIPS systems you can run Linux on.
If you want to share anything about MIPS, though, please feel free to comment; it would interest me greatly to hear what the rest of you have to say.
Linux equivalents of SketchyVim, for vim modal editing in any text box?
GitHub - FelixKratz/SketchyVim: Adds all vim moves and modes to macOS text fields
Adds all vim moves and modes to macOS text fields. Contribute to FelixKratz/SketchyVim development by creating an account on GitHub.GitHub
If I remember correctly, Wayland forbids applications from listening to global keyboard input for security reasons. Each piece of software receives only its own input; there is no global listening.
So, in my understanding, this could mean that it's not possible.
Need advice on a one pc home
Hi, so I want to build a PC for a home server (?) or NAS. I don't really know what's the most appropriate term, but what I intend to build is one PC for my household. Currently my requirements are: one work 'PC' capable of heavy 3D modeling, one light work PC, and two 4K gaming TVs (they most likely won't be used at the same time).
My knowledge of technical stuff is pretty basic, so please be patient with me.
Before, I used my Steam Deck to stream my work PC using Parsec, but I thought I'd just go all in on Linux and use VMs for the more niche 3D software.
My budget is flexible as long as I don't need to use enterprise hardware. Also, I heard Nvidia is not good on Linux, so I'd like to confirm whether that is still the case, as I'm thinking of using a 5090; if not, I hope AMD releases an equivalently capable card, or whatever my quick research suggests.
As for Linux, the only distro (?) I've ever used is the Steam Deck one and I love it. I'm not a programmer or even remotely capable of becoming one, so I'd like to avoid anything that requires manually typing commands in a terminal, but I'm open to surface-level tinkering.
thank you for your time
It's actually very simple:
monitors-on:
#! /bin/bash
hyprctl keyword monitor DP-1, 2560x1440@144, 0x0, 1
hyprctl keyword monitor DP-3, 2560x1440@144, 2560x0, 1
hyprctl keyword monitor HDMI-A-1, disable
monitors-off is basically the same thing but reversed:
#! /bin/bash
hyprctl keyword monitor DP-1, disable
hyprctl keyword monitor DP-3, disable
hyprctl keyword monitor HDMI-A-1, preferred, 0x0, 1
I'm still working out some kinks with audio so I don't wanna go down the rabbit hole hell that is pactl and pavucontrol in this post. But that's more of a universal Linux gripe I have than distro specific.
Obviously you'll need to tweak the script to your specific setup. The first numbers are the resolution, the number after the @ is the refresh rate, and the pair after that is the position. This is just an example. It's also Wayland-only, but you can do this in X11 no problem.
As far as "remotely" switching, I just assigned the scripts to keybinds in the hyprland config file. Super easy.
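For reference, the keybind lines in hyprland.conf might look something like this (the key choices and script paths here are hypothetical):

```
# hyprland.conf: bind = MODS, key, dispatcher, params
bind = SUPER, F7, exec, ~/.local/bin/monitors-on
bind = SUPER, F8, exec, ~/.local/bin/monitors-off
```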
How to use Java in Flatpak VSCodium [TUTORIAL]
After hours of trying to understand how to set up VSCodium with the Java extension, I found a solution, so here it is: an idiot-proof (I hope) tutorial for future me and others like me ;)
Flatpak VSCodium with java extension
Via Terminal
- Install VSCodium:
flatpak install com.vscodium.codium
- Install "Extension Pack for Java" extension for VSCodium:
flatpak run com.vscodium.codium --install-extension vscjava.vscode-java-pack
- Install the flatpak openjdk extension (in this case openjdk21):
flatpak install flathub org.freedesktop.Sdk.Extension.openjdk21
- Add two new environment variables to use the flatpak openjdk extension in VSCodium:
flatpak override --user --env=JAVA_HOME=/usr/lib/sdk/openjdk21 com.vscodium.codium && flatpak override --user --env=PATH=/usr/lib/sdk/openjdk21/bin:/app/bin:/usr/bin com.vscodium.codium
- Restart VSCodium:
flatpak kill com.vscodium.codium && flatpak run com.vscodium.codium
- Done.
Via Graphical interface
1. Install "VSCodium":
 - Go to the app store and search for "VSCodium".
 - Make sure it's the flatpak version.
 - Click the `Install` button and, after downloading, open the app.
2. Install the "Extension Pack for Java" extension in VSCodium:
 - Go to the `Extensions` panel (on the left).
 - Search for "Extension Pack for Java".
 - Click the `Install` button.
 - Close "VSCodium".
3. Install the flatpak openjdk extension (in this case openjdk21):
 - Search for the "Terminal" app and open it.
 - Paste the command below and press `Enter`:
flatpak install flathub org.freedesktop.Sdk.Extension.openjdk21
 - Close the "Terminal".
4. Install "Flatseal":
 - Go to the app store and search for "Flatseal".
 - Click the `Install` button and, after downloading, open the app.
5. Allow VSCodium to use the flatpak openjdk extension:
 - Search for "VSCodium" in Flatseal.
 - Go to Environment.
 - Click the `+` button (to the right of Variables) and paste:
PATH=/usr/lib/sdk/openjdk21/bin:/app/bin:/usr/bin
 - Click `+` once again and paste:
JAVA_HOME=/usr/lib/sdk/openjdk21
6. Restart VSCodium.
PS
There is a formatting issue with the markdown, but I think it's on Lemmy's side.
This is overly complicated. Just install Java then run
flatpak --user override --env="FLATPAK_ENABLE_SDK_EXT=openjdk" com.vscodium.codium
Note this works for all other SDKs too. It works especially well for programming languages like Rust that have their own package manager.
Doesn't work so well for languages like C/C++ where you use your distro package manager to install dependencies. In those cases it's easier to install VSCodium inside a container where you do have access to a distro package manager.
profiles to block
new "internet chick" profile that harasses the community:
@ etchant67 @ modalnode.club
creating a USB gadget
I want to create a USB gadget with a raspberry pi zero 2W. I'm starting with imitating a webcam I already have to see how much of this I can figure out. I've used the online documentation and a couple AI bots to get this far quickly, but I'm hung up on a ln command. It's telling me "ln: failed to create symbolic link 'configs/c.1/uvc.usb0': No such file or directory" when trying to create the link. This makes no sense to me though. I'm trying to create the link, of course it doesn't exist yet. That's what that command is supposed to do.
I've confirmed this problem in alpine linux and raspbian lite.
Below is the little script I have so far just to create the device:
#!/bin/bash
modprobe libcomposite
cd /sys/kernel/config/usb_gadget/
mkdir -p fauxcam
cd fauxcam
echo 0x046d > idVendor # Logitech Vendor ID
echo 0x094b > idProduct # Brio 105 Product ID
echo 0x0200 > bcdUSB
echo 0x9914 > bcdDevice
mkdir -p strings/0x409
echo "111111111111" > strings/0x409/serialnumber
echo "Brio 105" > strings/0x409/product
mkdir -p configs/c.1/strings/0x409
echo "UVC Configuration" > configs/c.1/strings/0x409/configuration
echo 250 > configs/c.1/MaxPower
mkdir -p functions/uvc.usb0
ln -s functions/uvc.usb0 configs/c.1/
ls /sys/class/udc > UDC # bind to the UDC controller; its name comes from /sys/class/udc, not "usb0"
Edit: OK, I looked at the docs, and they sure do make a broken symlink there. I still think it's worth a try to create a non-broken link; maybe the docs are wrong. I would expect they would put a little note there that yes, you really do want to create a broken symlink (if so, why not a regular file?), but then again, it's kernel docs, and those aren't the most friendly.
I also thought you were OP for some reason, sorry.
Edit2: If you look at the file listing later in the docs, you can see this:
./configs/c.1/ncm.usb0 -> ../../../../usb_gadget/g1/functions/ncm.usb0
Which does look like a real non-broken symlink, so I maintain the docs are wrong and you're not supposed to make a broken symlink.
Original comment, slightly edited:
You misunderstand. I suspect OP cannot create the symlink because it would be a broken symlink, not because the symlink is relative. Maybe you cannot create broken symlinks in configfs for some reason.
I was just trying to explain that a relative symlink is resolved relative to the directory in which it resides. The target of the symlink should point to ../../functions/uvc.usb0 if you want it to point to something that exists. The ln command in OP's listing would result in a broken symlink, since the specified path is not relative to the c.1 directory. It is relative to the working directory, but that's wrong; that's not what ln expects you to put there.
Maybe it needs to be a correct symlink, maybe that will solve the problem.
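To see the difference without touching configfs, the two target spellings can be compared in a throwaway directory tree (the layout below just mimics the gadget directories from OP's script):

```shell
# Mimic the gadget layout in a temp dir
tmp=$(mktemp -d)
mkdir -p "$tmp/functions/uvc.usb0" "$tmp/configs/c.1"
cd "$tmp"

# OP's spelling: the target string is resolved relative to configs/c.1/, so it dangles
ln -s functions/uvc.usb0 configs/c.1/broken
test -e configs/c.1/broken || echo "broken link"

# A relative target that climbs back up two levels resolves correctly
ln -s ../../functions/uvc.usb0 configs/c.1/uvc.usb0
test -e configs/c.1/uvc.usb0 && echo "link resolves"
```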
TIL Kitty terminal can show a dock panel on Linux desktops!
Draw a GPU accelerated dock panel on your desktop
You can use this kitten to draw a GPU accelerated panel on the edge of your screen or as the desktop wallpaper, that shows the output from an arbitrary terminal program. It is useful for showing st...kitty
Edit:
Here's a link to the thread
github.com/kovidgoyal/kitty/is…
There is already a terminal emulator named KiTTY. How will you differentiate your emulator from it? · Issue #9 · kovidgoyal/kitty
KiTTY is a fork of PuTTY - you can find it at http://kitty.9bis.net/ You may want to differentiate your product from KiTTY so that google knows the difference...GitHub
Can't update to Fedora Silverblue 42
I am trying to update from Silverblue 41 to 42 (fully updated) but run into issues when attempting to update from both the software app and from CLI.
The problem using the software app is the same as what is described by this other user, who is using Fedora Workstation not Silverblue like I am:
discussion.fedoraproject.org/t…
When I click the download button, it looks like it's downloading multiple files since the progress bar goes from 0 to 100 several times, and then it gets up to 95% then suddenly returns to the download button. This happens in about 30 seconds.
Using the CLI method, I run the following command:
rpm-ostree rebase fedora:fedora/42/x86_64/silverblue
and get the following errors:
error: Could not depsolve transaction; 1 problem detected:
 Problem: conflicting requests
  - package dnf5-plugin-automatic-5.2.12.0-2.fc42.x86_64 from updates requires libcurl-full(x86-64), but none of the providers can be installed
  - package dnf5-plugin-automatic-5.2.12.0-1.fc42.x86_64 from fedora requires libcurl-full(x86-64), but none of the providers can be installed
  - package dnf5-plugin-automatic-5.2.12.0-2.fc42.x86_64 from updates-archive requires libcurl-full(x86-64), but none of the providers can be installed
  - package libcurl-minimal-8.11.1-4.fc42.x86_64 from @System conflicts with libcurl(x86-64) provided by libcurl-8.11.1-4.fc42.x86_64 from fedora
SOLUTION: Uninstalled the layered packages (dnf-automatic, libreoffice, and the rpmfusion packages) and then restarted. The rebase command completed successfully thereafter.
Update to Fedora 42 fails in gnome-software
Most of my machines seem to be working fine, but I have one critical one where I have the notification for Fedora 42 in the gnome-software updates window, but when I click the Download button, the progress bar shows up to about 4% and then quits with…Fedora Discussion
It worked! I uninstalled dnf-automatic, libreoffice, and rpmfusion and then restarted.
Thanks for your help! Will keep your tips in mind for the future and try to avoid layering.
Repartition again plus Printer
Hey again.
Thank you again for all of the help with the dual boot and repartition a few weeks back. I am running Linux Mint.
I repartitioned the Linux side to about 25 GB, and over the last few weeks of just downloading updates,
I guess it has filled up. It tells me there is only 75 MB left. Is that normal, or can I free up room again?
Also, the printer no longer prints. It just hangs when I try to print. It shows up correctly as the HP Deskjet 3510 but won't print. Any tips on how to fix it?
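To check where the 25 GB went before repartitioning again, a couple of read-only commands help; on Mint, `sudo apt clean` and `sudo apt autoremove` then usually reclaim the update leftovers:

```shell
# Overall usage of the root filesystem
df -h /

# Largest top-level directories under /var (package caches and logs live here)
du -xh --max-depth=1 /var 2>/dev/null | sort -h | tail -n 5
```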
TIL Kitty terminal can show a dock panel on Linux desktops!
urxvt (for X11) or foot (for Wayland) ranked highly there.
The Social Network That Can't Sell Out: Understanding Mastodon vs. Bluesky
The Social Network That Can't Sell Out: Understanding Mastodon vs. Bluesky - phillipjreese.com
As users flee from Twitter/X, two visions of social media's future compete: Mastodon's community-controlled network versus Bluesky's venture-backed promises.phillipjreese@gmail.com (phillipjreese.com)
Bluesky won because it's centralized, and people don't have to choose an instance.
Why are you on Lemmy? Or, why do you think the decentralised model works here, but not on mastodon?
Or is it only working because there is no third party VC-backed reddit clone?
As a user of both Mastodon and Lemmy, I think there are inherent differences between the formats that make Lemmy easily a capable replacement for Reddit, but Mastodon not at all a replacement for Twitter.
To get into specifics, Lemmy is more meme and news based, and as long as there are a few thousand users using it and some percentage of those posting content...it largely scratches the same itch.
Twitter was very much an active global conversation forum. It was nicknamed the hell site for a reason: if someone took issue with or was very amused by something you posted, and you became "the main character" of Twitter for even an instant (something I experienced only very slightly), it was electrifying and even sort of scary at times.
In addition, the people that were active on there were very active, and it felt at times like you could talk to anyone who had been twitterized...which was a lot of people including prominent politicians, celebrities, and even experts of certain fields.
It was just an entirely different thing altogether. Mastodon is like many of the Twitter alternatives that have popped up from time to time. It's largely kinda the same with regards to functionality (though not having quote tweets is completely ridiculous IMO) but the engagement of it is very low, and the place largely feels very inactive. It feels like you're talking to dead feeds posted in syndication and there's nobody on the other end.
It's not the same as Twitter, and I doubt that Bluesky will even be the same as Twitter. Honestly, maybe all of that's a good thing. But the virality and the engagement and the discovery and everything on Mastodon is way turned down versus Twitter. Twitter was like the crack cocaine of social media...fast, cheap, addictive, and terrible for you. Mastodon is like a cup of tea by comparison.
I've been on it for a few years now.
It's different from Twitter and that's fine. I have no real drive to join bluesky to see if it's similar because Twitter felt deeply unhealthy anyway. Crack cocaine isn't good for you.
Nobody needs to know about the existence of, for instance, "bean dad".
Lemmy is barely a thing. Let's not get ahead of ourselves.
People do prefer centralized platforms with shiny front-faces and easy-to-navigate corporate bullshit. The reason why that stuff is so successful is because it works.
People fled to Bluesky because advertisers moved to Bluesky.
I think it's "the algorithm", people basically just want to be force-fed "content" ā look how successful TikTok is, largely because it has an algorithm that very quickly narrows down user habits and provides endless distraction.
Mastodon and fediverse alternatives by comparison have very simple feeds and ways to surface content, it simply doesn't "hook" people the same way, and that's competition.
On one hand we should probably be doing away with "the algorithm" for reasons not enumerated here for brevity, but on the other hand maybe the fediverse should build something to accommodate this demand, otherwise the non-fedi sites will.
[SOLVED] Installing Linux distro without breaking Windows install
Solution:
When I formatted all my drives to install Linux on one and Windows on the other, I kept both connected, and as a result they share an EFI boot partition. Every time I reinstall Linux it formats the drive and therefore deletes Windows's EFI boot files as well. One way to fix this is to reinstall Windows with the Linux drive disconnected. Or you can move the boot files if you don't want to do that.
I used this guide:
forums.tomshardware.com/thread…
OP:
Currently dual booting, as I need Windows for a few tasks and games Linux just won't do. Since setting everything up I've reinstalled Linux twice, and both times I've lost the ability to boot into Windows and have needed to reinstall it.
The disk doesn't show at all in GRUB; I've tried all kinds of things but it just doesn't show as a bootable OS. It doesn't show in the boot options in the BIOS or in the boot menu for my motherboard. The drive shows up and all the files are still on it. So my guess is the Windows bootloader somehow installs on the same disk that I have Linux on.
I run Linux(Fedora) and Windows on two separate drives.
Windows takes forever to install. Anything I can do now to prevent this from happening if I need to reinstall Linux or if I wanna do some distro hopping?
Just to be clear, everything is working right now. But I want to prevent having to reinstall Windows every time I change distro or reinstall my Linux OS
[SOLVED] - Changing Windows boot manager drive
I've recently installed Windows 10 on my new m.2 but when I unplugged my old HDD to perform an easier format on a different HDD, I only reached BIOS as the m.2 does have the OS on it but not the...Tom's Hardware Forum
This method shouldn't have anything to do with what distro you're gonna be using, as the fix itself happens in Windows.
It's a Windows fix relevant for dual booting Linux.
Edit: I used this exact method when I had two Windows installs on different drives and wanted to remove the original one from my system. Back in the Windows 7 days.
USB formatting on Bazzite (SOLVED)
Hello ladies and gentlemen. I'm a brand-new user (installed yesterday on a new computer) of Linux in general and Bazzite specifically. I had a bootable USB I was going to use for a different distro before I decided on using Bazzite with another USB.
I decided to use the first one to move my meme collection to the new computer, but when I deleted the partition and reallocated it with the highlighted option (the one that is not ms-dos; I can not remember the specific name), the drive seems to have disappeared. When I plug it in now, it does not auto-detect anything, and for the life of me I can not find any drives through Dolphin.
If anyone can tell me how I fucked it up and/or how to find it/ fix it I would be grateful. I can always do it in Windows since I have to set up the old one to access the memes anyway but I would like to know how to do it here for the future. Thanks in advance.
For me, it's going to be Fediverse or nothing
So I've tried Mastodon and Pixelfed and didn't like them. Mastodon is nice if you wanna "tweet", but that's not for me. Pixelfed was dead.
I quit Meta because of tech bro fascism, and hated Twitter even before it was X because, let's face it - nobody has ever changed their opinion on anything because of a Twitter conversation (I know I'm exaggerating to get my point across). I was on Reddit for a few weeks, and the conversations there seemed mostly friendly and constructive, but I decided I don't want to have anything to do with social media corporations. Besides, I noticed I could scroll endlessly. And that's not good for me.
Lemmy seems nice. There are still some topics I'm interested in that don't have active communities, and I'm still learning how to build my feed from multiple instances. But still, this is the way to go for me.
Against algorithms, against fascism, for a free internet. Thanks for coming to my boring TED talk and have a nice day.
The problem with Matrix to me is that it is simply too unstable. I can open it up on any device and half the messages won't load or are corrupted. Media won't show at all. In contrast Lemmy has been super reliable and "just works", so going from reddit to lemmy was no problem at all. And the communities are great too.
I just want working voice chat and group chats.
I don't really care about the whole streaming side; there are other apps for that.
But yeah, as it is, I'm probably better off using Discord until the enshittification gets so bad no one wants to use it anymore.
Weird stuttering on fresh Fedora 42 GNOME install
EDIT:
This has worked, thanks for the help:
LD_PRELOAD="" VK_LOADER_LAYERS_ENABLE=VK_LAYER_MANGOHUD_overlay_x86_64 %command% --skip-launcher --vulkan
Hi, so I've been using Fedora 41 GNOME since release with no issues at all and I've decided to do a new fresh install of Fedora 42 yesterday.
Everything seemed to run well but I've encountered this issue in games that after around 30min I get this weird stutter. Until then everything runs smoothly.
As you can see in the video, the stutter only occurs during mouse movement or during camera movement with the keyboard. Once the camera moves on its own and just tracks the character, the frametimes are perfectly flat, so it does not seem like the fault is in the game; is something off with the system compositor?
This happens with or without VSync, I've tried with and without VRR, and I've tried changing game settings and also different Proton versions... the only thing that helps is to restart the game, but then I'll have to do it again in about 30 min.
My suspicion is on the new triple buffering in new GNOME 48 but I have no idea how to turn it off to test.
Any suggestions?
Is this launching games through Steam? I had a similar issue launching games through Steam using gamescope and had to set some launch options. Unfortunately I am at work and can't remember what those launch options are but when I get back home I will add them.
Edit launch options:
LD_PRELOAD="" gamescope -ef -W 3840 -H 2160 -r 144 --hdr-enabled --adaptive-sync --mangoapp -- gamemoderun %command%
As others have mentioned I think it was the "LD_PRELOAD=" that actually fixed the issue
Of course you have stuttering if you use that in the screenshot as your desktop.
Wait, wasn't there a real desktop environment that uses a game engine for rendering?
[Solved, sort of] Keyboard doesn't work after logging in. Fedora
Update: The issue disappeared without me doing anything. After just letting my computer sit turned off for a few hours, I started it back up to troubleshoot. Now it works again. Something happened to break it and then to unfuck it again without any input from me. Something is unstable and I'm gonna try to figure it out.
Started my PC up today, logged in like normal, but my keyboard won't work after logging in.
Except for the calculator button, none of the keys will actually do anything. But logging in works normally.
It worked fine last night, and no updates have run or anything. Where do I start diagnosing this? In a way where I won't need a keyboard?
Fedora 42 KDE
Edit: The keyboard works fine in a live environment on the USB I used to install yesterday. I tried a different keyboard on my main install, and that didn't work either. So it's not the keyboard itself at least.
I already checked the input settings, and those look to be right.
No slow keys enabled in settings and no response holding down keys for up to 15 seconds.
My keyboard can use both a dongle and BT. But I can't find it on BT. The other keyboard is the same model, which isn't ideal, but it was worth testing out.
Also, the mouse still works. And I'm logged into here in my browser so I can copy commands and stuff.
Tried turning stuff on, then off, then reset to default. Nothing.
Nothing, except the function keys for volume etc. and the calculator key work when I'm logged in. I can log out and type my password normally.
I can get to a console from the login screen, which tells me to login. But I get incorrect credentials even though they are correct.
Decentralization Scoring System
Decentralization Scoring System (v1.0)
This scoring system evaluates how decentralized and self-hostable a platform is, based on four core metrics.
Scoring Metrics (Total: 100 Points)
Top Provider User Share (30 points): Measures how many users are on the largest instance. Full points if <10%; 0 if >80%.
Top Provider Content Share (30 points): Measures how much content is hosted by the largest instance. Full points if <10%; 0 if >80%.
Ease of Self-Hosting: Server (20 points): Technical ease of running your own backend. Full points for Docker/simple setup with good docs.
Ease of Self-Hosting: User Interface (20 points): Availability and usability of clients. Full points for accessible, FOSS, multi-platform clients.
Example Breakdown (Estimates)
Email (2025)
- Top Provider User Share: Apple ≈ 53.67% → Score: 4.5/30
- Top Provider Content Share: Apple likely handles >50% of mail → Score: 4.5/30
- Self-Hosting: Server: Easy (leverage email hosting services) → Score: 18/20
- Self-Hosting: Client: Easy (Thunderbird, K-9, etc.) → Score: 18/20
Total: 45/100
Lemmy (2025)
- Top Provider User Share: lemmy.world ≈ 37.17% → Score: 12/30
- Top Provider Content Share: lemmy.world likely hosts ~37% of content → Score: 12/30
- Self-Hosting: Server: Easy (Docker, low resource) → Score: 18/20
- Self-Hosting: Client: Good FOSS apps, web UI → Score: 18/20
Total: 60/100
Mastodon (2025)
- Top Provider User Share: mastodon.social ≈ 42.7% → Score: 11/30
- Top Provider Content Share: mastodon.social ≈ 45–50% of content → Score: 10/30
- Self-Hosting: Server: Docker setup, moderate difficulty → Score: 15/20
- Self-Hosting: Client: Strong ecosystem (Tusky, web, etc.) → Score: 19/20
Total: 55/100
Bluesky (2025)
- Top Provider User Share: bsky.social ≈ 90%+ (very centralized) → Score: 0/30
- Top Provider Content Share: Nearly all content on bsky.social → Score: 0/30
- Self-Hosting: Server: PDS hosting possible but very niche → Score: 4/20
- Self-Hosting: Client: Mostly the official client; some 3rd party → Score: 10/20
Total: 14/100
Reddit (2025)
- Top Provider User Share: Reddit ≈ 48.4% → Score: 0/30
- Top Provider Content Share: Reddit hosts a significant portion of user-generated content → Score: 0/30
- Self-Hosting: Server: Not self-hostable (proprietary platform) → Score: 0/20
- Self-Hosting: Client: Some unofficial clients available → Score: 3/20
Total: 3/100
How Scores are Calculated
How User/Content Share Scores Work
This measures how many users are on the largest provider (or instance).
- 100% (one provider): If one provider has all the users, it gets 0 points.
- No provider > 10%: If no provider has more than 10%, it gets the full 30 points.
- Between 10% and 80%: Anything in between is scored on a linear scale.
- Above 80%: If a provider has more than 80%, it gets 0 points.
Formula:
Score = 30 × (1 - (TopProviderShare - 10%) / 70%)
…but only if TopProviderShare is between 10% and 80%.
If below 10%, full 30. If above 80%, zero.
Example:
If one provider has 40% of all users:
→ Score = 30 × (1 - (40 - 10) / 70) = 30 × (1 - 0.43) ≈ 17.1 points
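The formula above translates directly into code; a minimal sketch (function name is mine, not from the spec):

```python
def share_score(top_share_pct: float, max_points: float = 30.0) -> float:
    """Score a top-provider share percentage on the linear 10%-80% scale."""
    if top_share_pct <= 10:
        return max_points   # no provider dominates: full points
    if top_share_pct >= 80:
        return 0.0          # near-total centralization: zero
    return max_points * (1 - (top_share_pct - 10) / 70)

print(round(share_score(40), 1))  # the worked example: 17.1
```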
How Ease of Self-Hosting Scores Work
These scores measure how easy it is for individuals or communities to run their own servers or use clients.
This looks at how technically easy it is to run your own backend (e.g., email server, Mastodon server) or user interface (e.g., web interface or mobile app).
- Very Easy: One-command Docker, low resources, great documentation → 18–20 points
- Moderate: Docker or manual setup, some config, active community support → 13–17 points
- Hard: Complex setup, needs regular updates or custom config (e.g. DNS, spam) → 6–12 points
- Very Hard or Proprietary: Little to no self-hosting support, undocumented → 0–5 points
PS.
This is version 1.0, so there are likely flaws and mistakes in it; feel free to help create the best version we can. I've put it on github.com/NoBadDays/decentral…
decentralization-score/decentralization_score_2025.04.md at main Ā· NoBadDays/decentralization-score
A scoring system to measure how decentralised a service is. - NoBadDays/decentralization-scoreGitHub
Based on my brief searches yes, but I haven't looked into the example data in great detail.
If you have a good data point for me I can update the examples.
[SOLVED] Power Profile not working on Arch with KDE. Tried everything.
My laptop does support this feature since it was working on Fedora KDE. But jumping over to arch, it seems not to work at all.
1. power-profiles-daemon.service
is enabled and running.
● power-profiles-daemon.service - Power Profiles daemon
Loaded: loaded (/usr/lib/systemd/system/power-profiles-daemon.service; enabled; preset: disabled)
Active: active (running) since <time>; 12min ago
Invocation: 4f20b3d144584a759b4a6c5ea14aa739
Main PID: 608 (power-profiles-)
Tasks: 4 (limit: 6850)
Memory: 1.6M (peak: 2.8M)
CPU: 81ms
CGroup: /system.slice/power-profiles-daemon.service
└─608 /usr/lib/power-profiles-daemon
Apr 18 11:14:52 berserk-arch systemd[1]: Starting Power Profiles daemon...
Apr 18 11:14:52 berserk-arch systemd[1]: Started Power Profiles daemon.
2. plasma-powerdevil.service
is static and running.
● plasma-powerdevil.service - Powerdevil
Loaded: loaded (/usr/lib/systemd/user/plasma-powerdevil.service; static)
Active: active (running) since <time>; 12min ago
Invocation: 7d72f24a0e5e4a74889a3895b91eb51c
Main PID: 1074 (org_kde_powerde)
Tasks: 9 (limit: 6850)
Memory: 10.6M (peak: 11.4M)
CPU: 1.391s
CGroup: /user.slice/user-1000.slice/user@1000.service/background.slice/plasma-powerdevil.service
└─1074 /usr/lib/org_kde_powerdevil
3. upower.service
is enabled and running.
● upower.service - Daemon for power management
Loaded: loaded (/usr/lib/systemd/system/upower.service; enabled; preset: disabled)
Active: active (running) since <time>; 12min ago
Invocation: 7aa43a43146346e383c961ce12cc9ded
Docs: man:upowerd(8)
Main PID: 540 (upowerd)
Tasks: 4 (limit: 6850)
Memory: 5.1M (peak: 5.9M)
CPU: 251ms
CGroup: /system.slice/upower.service
└─540 /usr/lib/upowerd
I've already tried putting
GRUB_CMDLINE_LINUX_DEFAULT="amd_pstate=active"
as a kernel argument, but that doesn't seem to do anything either. I can't figure it out. The power management settings work, though. Any idea what's wrong? Thanks.
TuneD is really cool, but a weird fix for that problem.
The lack of TuneD is one of the few things keeping me on Fedora and away from NixOS.
Canonical Releases Ubuntu 25.04 Plucky Puffin | Canonical
The latest interim release of Ubuntu introduces "devpacks" for popular frameworks like Spring, along with performance enhancements across a broad range of hardware. 17 April 2025. Today Canonical announced the release of Ubuntu 25.04, codenamed "Plucky Puffin". (Canonical)
Snaps promise to do some really cool things. They're just a bitch to use, they're slow, and they're tied too heavily to Canonical.
Weren't you supposed to be able to snap in and out a kernel by now? Like, not even needing a reboot?
Why do you use the distro you use?
Title is quite self-explanatory, reason I wonder is because every now and then I think to myself "maybe distro X is good, maybe I should try it at some point", but then I think a bit more and realise it kind of doesn't make a difference - the only thing I feel kinda matters is rolling vs non-rolling release patterns.
My guiding principles when choosing a distro are that I run Arch on my desktop because it's what I'm used to (and the AUR is nice to have), and Debian on servers because some people said it's good and the non-rolling release gives me peace of mind that I don't have to update very often. But I could switch both of these out and I really don't think it would make a difference at all.
Is the restoration method mentioned here really only achievable via nixos? How can you be so confident that you are truly reobtaining an "exact same system"?
Nixos consistently intrigues me because of what it seems to be accomplishing but I can never dive in because there seems to also be many warnings about the investment required and the potential for other more complicated and really nuanced drawbacks to arise.
Give it to me straight--is it offering a new approach of stability with the emphasis on reproducibility? If I'm a gentoo enjoyer hardset in my ways, what could I stand to gain in the nixos/guix realm?
Your personal files e.g. ~/Documents are not recreated, you'll still need backups of those.
The caveats are that you've got to use:
- home-manager to generate your dotfiles.
- something akin to sops to generate and securely store your private keys and secrets.
But all this can be written in the one flake, so yes, nixos-install --flake <GIT URL>#<HOSTNAME>
is sufficient for me to rebuild my desktop, laptop or server from the same repository.
I've never used Gentoo, and I'm sure there are other methods of achieving the same level of reproducibility but I don't know what they are.
NixOS can be as modifiable as Gentoo, with the caveat that it's a massive pain in the ass to do some things. I have a flake for making aarch64-musl systems, which has been an endeavour, and... it works? I have a running system that works on 2 different SoCs. I do have to compile everything quite often though.
There are efforts to recreate NixOS without systemd, but that's a huge effort; because it's very "infrastructure as code", you have to change a lot of code where editing a build script would've sufficed on Arch/Gentoo.
As for Nix vs Guix, Guix was described to me as "if you only ever want to write in Scheme", whereas Nix feels much more like a means to an end, with practical compromises spattered throughout.
Hi, I need in-person help from a computer whiz-can travel
Hello!
So, I live on a bus. We travel around, it's pretty great. I don't have a laptop or a mailing address that works, so getting certain things done is difficult, and I have two things I need help with.
I installed a solar system a while back, with an older charge controller a friend recommended. I more recently upgraded the batteries to lithium iron (LiFePO4). So now this controller requires reprogramming, and to do so you have to plug an RJ45 (pretty sure that's the name) into it, and probably download some shitty Chinese spyware program to fiddle with it. Their newer models use Bluetooth and require an app, of course.
The other thing is either much trickier or impossible, and while I've booted up dumpstered laptops with thumbdrive linux before (and found the homemade blowjob video, heh) I've no idea how to even go about fiddling with this.
It's a (shitty Chinese) dash/backup/security camera system. It's been referred to as a 'pizza box' system by someone who hates money. It might have a wifi chip onboard, but I can't figure out whether it does or not.
I'd like to flash it to run Linux, if possible, and put some actually usable video monitoring/porting/maybe editing programs on there. The current UI is unusable even when it's cooperating. Like if there were an accident, I'd just basically be bluffing. Sure, the data's probably there, but it's in a format that won't register on any device I've plugged the SD card into. I need it to export to a filetype I can use with an iPad, which is the only computer we have aboard.
If any of this sounds like a fun or interesting challenge, I can throw some dollars at you. Or trade work! We do auto/diesel/bicycle mechanic work, welding, sewing, leather and general handy shit.
Sell it and get something with an existing FOSS firmware. And a laptop (dumpster ones work too). What you're asking for is $1000 upfront, at minimum, with no satisfaction guarantee.
If you're willing to do most of the work yourself, I'd suggest finding an official firmware update and running binwalk on it. Also take good photos of the PCB and look for datasheets of every chip. Then you'll be able to pose specific questions and maybe get decent help.
Still, it's probably best to set up ONVIF client software or something.
Canonical Releases Ubuntu 25.04 Plucky Puffin
Hardware enablement highlights
Canonical continues to enable Ubuntu across a broad range of hardware. The introduction of a new ARM64 Desktop ISO makes it easier for early adopters to install Ubuntu Desktop on ARM64 virtual machines and laptops.
Qualcomm Technologies is proud to collaborate with Canonical and is fully committed to enabling a seamless Ubuntu experience on devices powered by Snapdragon®. Ubuntu's new ARM64 ISO paves the way for future Snapdragon enablement, enabling us to drive AI innovation and adoption together.
Leendert van Doorn, SVP, Engineering at Qualcomm Technologies, Inc.
[solved] What ports do I need to open for mDNS?
EDIT: The bad solution is to unblock UDP port 5353, but the port has to be the source port, not the destination port (the --sport flag). See the now-modified rules. The issue is that this is very insecure (see this Stack Exchange question and its comments), but obviously better than no firewall at all, because at least I'm blocking TCP traffic.
The proper solution (other than using glibc and installing the nss-mdns package) is to open a port with netcat (nc) in the background (using &) and then listen with dig on that port using the -b flag.
port="42069"
nc -l -p "$port" > /dev/null || exit 1 &
dig somehostname.local @224.0.0.251 -p 5353 -b "0.0.0.0#${port}"
Then we need to remember to kill the background process. The DNS reply will now be sent to port 42069, so we can just open it with this iptables rule:
-A INPUT -p udp -m udp --dport 42069 -j ACCEPT
---->END OF EDIT.
I want to set up an iptables firewall, but if I do that, it blocks multicast DNS, which I need. I am using the command
dig "somehostname.local" @224.0.0.251 -p 5353
to get the IP through mDNS and these are my iptables rules (from superuser.com):
*filter
# drop forwarded traffic. you only need it of you are running a router
:FORWARD DROP [0:0]
# Accept all outgoing traffic
:OUTPUT ACCEPT [623107326:1392470726908]
# Block all incoming traffic, all protocols (tcp, udp, icmp, ...) everything.
# This is the base rule we can define exceptions from.
:INPUT DROP [11486:513044]
# do not block already running connections (important for outgoing)
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# do not block localhost
-A INPUT -i lo -j ACCEPT
# do not block icmp for ping and network diagnostics. Remove if you do not want this
# note that -p icmp has no effect on ipv6, so we need an extra ipv6 rule
-4 -A INPUT -p icmp -j ACCEPT
-6 -A INPUT -p ipv6-icmp -j ACCEPT
# allow some incoming ports for services that should be publicly available
# -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
# -A INPUT -p udp -m udp --dport 5353 -j ACCEPT # does not help
-A OUTPUT -p udp -m udp --sport 5353 -j ACCEPT # SOLVES THE ISSUE BUT IS INSECURE - not recommended
# commit changes
COMMIT
Any help is welcome!
Deny all incoming connections with iptables?
I want to make some simple iptables rules to deny all incoming connections and allow outgoing. How can I do that? (Super User)
As I said, I'm not sure about that.
Still, dig won't be listening on port 5353 for the answer; it'll open some random port, so the firewall rule for 5353 will not apply. And the conntrack rule, I'd guess, also doesn't apply, because what I think the conntrack module does is:
- Remembers the outgoing connection (i.e. when dig sends its UDP packet out): source port, destination IP and port
- Checks incoming packets against this info, and lets them through if they appear to be an answer
Since the outgoing packet is going to multicast, and the incoming packet (I suspect) is coming from the IP of the machine that answers (a different IP, therefore), conntrack wouldn't be able to figure that out. The answer doesn't match the outgoing packet that dig sends. Since this is just a hunch, I would try to confirm this by looking at the traffic in e.g. Wireshark.
Edit 2: Actually, dig picks a random port to send the mDNS request from and sends it to 224.0.0.251:5353 (multicast IP). The correct host then replies from port 5353 to the previously picked random port from dig. But I found that you can specify the port with dig -b IP#port, so I think that should help. I kinda don't have the time to try it out currently though.
End of edit 2.
well I randomly solved it by adding
-A OUTPUT -p udp -m udp --sport 5353 -j ACCEPT
Which basically means you are right. The destination port is just some randomly picked number (checked with Wireshark), so I have to filter based on the source port, which is 5353.
Edit: Also thanks for your help!
crc32sum - Calculate CRC32 for each file (Bash using 7z) - bugfix
Hi all. This is an update on my script extracting CRC32 checksum from the 7z commandline tool. The output should be similar to how the md5sum tool outputs, the checksum and the file name/path.
The initial version of this script was actually broken. It would not output all files if a directory was included (wrong counting of files through the argument number). Also, filenames that contained a space would only output the first part, up to the space character. All of these rookie mistakes are solved. Plus there is a progress bar showing which files are being processed at the moment, instead of showing a blank screen until the command is finished. This is useful if there are a lot of files or some big files to process.
Yes, I'm aware there are other ways to accomplish this task. I would be happy to see your solution too. And if you encounter a problem, please report.
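Since the post invites other solutions, here is one alternative sketch that avoids parsing 7z's table output entirely: gzip stores the CRC32 of the uncompressed data in the trailing 8 bytes of each member (CRC32, then size, both little-endian), so it can be extracted with standard tools (the function name is my own):

```shell
# CRC32 of a file via gzip's trailer: the last 8 bytes are CRC32 + size,
# little-endian, so take bytes -8..-5 and reverse their byte order.
crc32_of() {
  gzip -c -- "$1" | tail -c 8 | head -c 4 | od -An -tx1 |
    awk '{ printf "%s%s%s%s\n", $4, $3, $2, $1 }'
}
printf 'hello' > /tmp/crc_demo
crc32_of /tmp/crc_demo   # 3610a686
```

It is slower than 7z's native CRC mode since it actually compresses the data, but it needs nothing beyond coreutils and gzip.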
(Note: Beehaw does not like the "less than" character and breaks the post completely. So replace the line cat %%EOF with the proper heredoc line, or copy it from the GitHub Gist link below:)
#!/usr/bin/env bash
if [[ "${#}" -eq 0 ]] || [[ "${1}" == '-h' ]]; then
self="${0##*/}"
cat %%EOF
usage: ${self} files...
Calculate CRC32 for each file.
positional arguments:
file or dir one or multiple file names or paths, if this is a directory
then traverse it recursively to find all files
EOF
exit 0
fi
7z h -bsp2 -- "${@}" |
\grep -v -E '^[ \t]+.*/' |
\sed -n -e '/^-------- ------------- ------------$/,$p' |
\sed '1d' |
\grep --before-context "9999999" '^-------- ------------- ------------$' |
\head -n -1 |
\awk '$2=""; {print $0}'
crc32sum - Calculate CRC32 for each file (Bash using 7z)
crc32sum - Calculate CRC32 for each file (Bash using 7z) - crc32sum (Gist)
Agreed on your points, and I usually do 2 (name) and sometimes 3 (exit instead of else). As for [[ over [, it usually matters only for word-splitting and globbing behavior, if you do not enclose the variables in quotes, I believe. But looking at the shellcheck entry, it looks like there is no disadvantage. I may start doing this by default in the future too.
So thanks for the suggestions, I will update the script in a minute.
Edit: I always forget that Beehaw will break if I use the "less than" character, so I replaced it in the post with
cat %%EOF
which requires changing that line back. And the example usage is gone for the moment.
Edit2 (21 hours later): I totally forgot to remove the indentation and else-branch. While doing so I also added a special option -h, in case someone tries that. Not a big deal, but thought this should be mentioned.
Sharing some of my newest small Bash scripts using 7z
New version for toarchive: gist.github.com/thingsiplay/88ā¦
(I have added a new version of the script. The old one is renamed to 'toarchive-old'. The new script has some guard rails and more checks. Also, original files can be removed automatically on success, like gzip does, but an option -r must be explicitly given here, as in toarchive zip -r file.txt. Directories can be removed too, but the uppercase option -R is required, as in toarchive zip -R my_dir. Bear in mind this will use the rm -r system command; although some guard rails are in place to prevent massive failure, you should be very careful. Note that no file is removed if -r or -R are not used at all.)
I always write little scripts and aliases that help me from time to time. I just wanted to share some of my newest simple scripts. There are probably better or easier ways to do this, but writing and testing them is fun too. Both make use of the 7z
command, a commandline archive tool. Posting it here so anyone can steal them. They are freshly written, so maybe there are edge cases.
(Update April 17, 2025: Note this is a new version that addresses some issues. The old version I had posted was broken.)
#!/usr/bin/env bash
# Calculate CRC32 for each file.
if [ "${#}" -eq 0 ]; then
echo "crc32sum files..."
echo "crc32sum *.smc"
else
7z h -bsp2 -- "${@}" |
\grep -v -E '^[ \t]+.*/' |
\sed -n -e '/^-------- ------------- ------------$/,$p' |
\sed '1d' |
\grep --before-context "9999999" '^-------- ------------- ------------$' |
\head -n -1 |
awk '$2=""; {print $0}'
fi
toarchive:
#!/usr/bin/env bash
# Create one archive for each file or folder.
if [ "${#}" -eq 0 ]; then
echo "toarchive ext files..."
echo "toarchive zip *.smc"
else
ext="${1}"
shift
opt=()
stop_parse=false
for arg in "${@}"; do
if [ ! "${stop_parse}" == true ]; then
if [ "${arg}" == "--" ]; then
stop_parse=true
opt+=(--)
continue
elif [[ "${arg}" =~ ^- ]]; then
opt+=("${arg}")
continue
fi
fi
file="${arg}"
7z a "${opt[@]}" "${file}.${ext}" "${file}"
done
fi
Is there an easy way to create blocklist of post or comment for other people?
I'll bring you straight into my mind: I was scrolling through the n-th depressing post of the ~~day~~ hour and I thought, "If I answer that post/comment with #negativity, will other people be able to filter out this content using my answer?" If not, how could we build some sort of blocklist for people to curate their experience on the fediverse?
I know I can block keywords like "politics", "Trump", "Elon", but sometimes content doesn't have one precise word, yet a human can categorise it easily.
I don't agree with this particular use case, because I've personally experienced people who shun "negativity" by just completely ignoring people's suffering, which often adds a devastating layer of invisibility to oppression. But hopefully this isn't your case, and it's more about "doom and gloom" than people's reality of suffering.
But anyway, I do agree that blocklists are probably a feature that Lemmy needs.
Help me understand DSP on Linux?
And also on computers generally lol.
The situation: I'm trying out Bitwig on my geriatric computer, which is running Linux Mint. It seems that I can't do very much without spiking the DSP, leading to awful glitchiness in playback. However, according to btop, the CPU (i7 4770) load isn't breaking 30%, spread evenly across the cores.
Things I have tried:
- uninstalling speech dispatcher, which helped
- tweaking the pipewire config, which doesn't seem to have helped much
So... what is the bottleneck here?
EDIT: the (main) issue was that my user didn't have real time priority permissions. An edit to /etc/security/limits.conf has improved things immeasurably.
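For anyone hitting the same wall, the limits.conf change the edit refers to typically looks like the following (a sketch; the group name and values are assumptions, common defaults for JACK/PipeWire realtime audio, and distros vary):

```
# /etc/security/limits.conf (excerpt) - allow the audio group realtime
# scheduling priority and memory locking; log out and back in after editing
@audio  -  rtprio   95
@audio  -  memlock  unlimited
```

Your user must actually be in the group (check with the `groups` command) for the limits to apply.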
This is what resistance to the digital coup looks like
This is what resistance to the digital coup looks like
Technological platforms are not neutral. If we truly want to resist the digital coup that is currently under way, we need to normalize the use of free, open source solutions. (Elena Rossini)
Just because xitter and facebook and google failed to find a way to monetize it and therefore tried to kill it doesn't make it dead.
She touches on the aspect of monetization and claims that "you could save money by being on the Fediverse".
Yes, in theory it is possible. In practice this is something that is only available to the already-famous journalists who have enough pull to move their audience from Substack to their own property.
For everyone else, the Fediverse is (a) too small and (b) too "anti-money" to encourage professionals to even try making a living here. They stay on Substack for the same reason that video creators stay on YouTube: it's a horrible master, but at least it lets them pay their bills.
Why I'm breaking up with Windows
I'm going back to Linux after ~8 years of maining Windows. I was a Linux desktop and server user back in college and did all my dev on there. When I got my first job, I bought a better laptop and started maining Windows.
I am going back to Linux for three main reasons: I hate the Windows 11 UI, I'm increasingly paranoid about privacy/security, and the development experience for native software has sucked for a long time.
Besides the obvious downward spiral in UI since Windows 7, it's also become unreliable and slow. Some days, File Explorer just won't open. Others, it takes a full minute to load my "home" view, and some others I get weird bugs where the color settings are broken or I can't actually click on folders anymore. The Start menu is slow to open when pressing the Windows key, and Windows Search is slow to index and sometimes looks stuff up on Bing instead of opening a file. The default apps (calculator, image viewer, media player) have been getting replaced with slower UWP versions with flatter and flatter UI. Finally, Windows is increasingly pushing AI stuff onto the platform, which leads me to privacy/security.
I am increasingly paranoid these days about privacy and security. While I don't have any outstanding issues with security at large, I don't trust Microsoft's telemetry collection and I especially don't trust anything that gets sucked up into Windows Recall's AI Black hole. This hasn't been an issue, but I've always wondered why Microsoft hasn't made it simpler to create containerized applications with AppX/Windows SDK. It seems like it should be way easier to create a flatpak-like sandboxed application with any API (Win32, WinForms, WPF, or any language really).
Believe it or not, Windows is a good development platform, these days, unless you're trying to write Windows software. Microsoft, under Satya Nadella, has been taking care of its developer community and making a lot of tools free and some open source. vcpkg has revolutionized my C++ development and I've always been fond of many MSVC extensions such as SAL. There's a lot of pros and cons, but I generally prefer NT API calls over POSIX API calls (which are far more long in the tooth than NT at this point). That said, I tend to just write cross-platform "modern" C++ and don't make too many system calls anymore. I will miss Visual Studio (and the ease of SLN/Vcxproj files), and it seems like the only comparable C++ IDE available for Linux is CLion. I'm actually a fan of DirectX and HLSL over OpenGL and Vulkan: Microsoft has made a lot of really great first party libraries/tools available for DirectX that make it a really fun API to work with when you include DirectXTK. I am one of the rare few users who actually enjoys PowerShell; I prefer piping typed, structured data over piping streams of bytes. I also really hate sh/zsh/bash syntax.
That said: Microsoft has utterly lost the plot on native Windows application development. They release a new UI framework for C# (and whatever the latest managed C++ framework is) every 3 or so years, and then immediately fail to support it, subtly changing XAML syntax or .NET namespaces so that your old UWP or WPF code is strangely not compatible anymore. To me, what is most telling about Microsoft's level of commitment to its newest frameworks is the fact that they are still supporting WinForms with modern, cross-platform .NET builds, meaning that you can use modern C# and .NET features in a runtime that is supposed to have been replaced by their XAML products a long time ago. The only really viable way to write a DirectX application, and the only way that has any official documentation, is STILL to use the original Win32 APIs to create a window and manage IO.
So anyways, I'm not as zealous about Linux as most people on the internet are; I still think Windows is a good software development platform and maybe Microsoft can turn the ship around some day, but I doubt it.
the development experience for native software has sucked for a long time.
For as long as Windows has existed, I have found its APIs to be noisy, awkward, and generally unpleasant to use. It was a major part of why I switched my development focus to Unix a long time ago. I guess this is a matter of personal taste; I wonder how you'll feel about the APIs more commonly used on Linux after five or ten years of using them full-time.
Despite a few niggles (I don't care for Bourne-style shell syntax or Windows shell syntax) I have found my productivity to be better and more enjoyable since the switch. Nowadays, benefits include everything that comes with an open-source ecosystem, like the software install/update model of Linux distros, and the ability to solve or work around library/OS problems myself if I can't wait for someone else to fix something.
And, of course, having a privacy-respecting platform for myself and my users is important to me.
In short, I'm happier here. Welcome.
By the way, if you do cross-platform desktop app development in native code, give Qt a try. It does an excellent job overall.
they are still supporting WinForms with modern, cross platform .Net builds, meaning that you can use modern C# and .Net features in a runtime that is supposed to have been replaced by their XAML products a long time ago.
Microsoft is all about corporate clients, that's why their Windows is backwards compatible down to Windows 95, because there is some big corporation that buys the corporate license in bulk and runs some corporate Windows 95 accounting application on it.
Can the US government force Canonical and Red Hat to disallow downloads and development from non US countries?
Basically the title.
I have seen the EU-OS/Suse discussions for some months now. However, Ubuntu/Arch/Fedora are extremely mature projects. So competing against them will be hard.
I want to know how realistic the scenario (described by the question) is.
but as soon as the inevitable body bags come crashing back into the country, it would kill this administration
No they won't. I've heard all kinds of stories like this for Putin and he's still alive and happy.
He's losing men over 50 years old who are left out and treated as the garbage of society. They wouldn't be participating in any demographic activities anyway.
I bet there are millions of Americans like this too. Coal miners who lost the job, casino players, heavy drinkers. For a hefty sum of money and a chance to be important again they'd do anything. You really underestimate how quick they can be turned into a cannon fodder and how little the society will miss them.
Source: I lived in Russia.
Security is a mess, and why a threat model is important
This post is long and kind of a rant. I don't expect many to read the whole thing, but there's a conclusion at the bottom.
On the surface, recommended security practices are simple:
- Store all your credentials in a password manager
- Use two factor authentication on all accounts
However, it raises a few questions.
- Should you access your 2FA codes on the same device as the password manager?
- Should you store them in the password manager itself?
This is the beginning of where a threat model is needed. If your threat model does not include protections against unwanted access to your device, it is safe for you to access your 2FA codes on the same device as your password manager, or even to store them in the password manager itself.
So, to keep it simple, say you store your 2FA in your password manager. There's a few more questions:
- Where do you store the master password for the password manager?
- Where do you store 2FA recovery codes?
The master password for the password manager could be written down on a piece of paper and stored in a safe, but that would be inconvenient when you want to access your passwords. So, a better solution is to just remember your password. Passphrases are easier to remember than passwords, so we'll use one of those.
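Such a passphrase can be generated rather than invented, which avoids predictable choices. A minimal bash sketch (the eight-word list here is purely illustrative; a real diceware-style list like EFF's has thousands of words):

```shell
# Pick 4 random words from a word list and join them with hyphens.
# (hypothetical tiny list for illustration only; use a real word list)
words=(correct horse battery staple orbit lantern velvet canyon)
passphrase=$(shuf -e "${words[@]}" | head -n 4 | paste -sd '-')
echo "$passphrase"
```

The strength comes from the size of the list and the number of words, not from the words themselves, so the list being public is fine.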
Your 2FA recovery codes are needed if you lose access to your real 2FA codes. Most websites just say "Store this in a secure place". This isn't something you want to store in the same place as the codes themselves (in this case, our password manager), and it's not something you will access often, so it's safe to write them down on a piece of paper and lock them in a safe.
Good so far, you have a fairly simple system to keep your accounts safe from some threats. But, new problems arise:
- What happens if you forget your master passphrase?
- What happens if others need access to your password manager?
The problem with remembering your passphrase is that it's possible to forget it, no matter how many times you repeat it to yourself. Besides naturally forgetting it, things like injuries can arise which can cause you to forget the passphrase. Easy enough to fix, though. We can just keep a copy of the passphrase in the safe, just in case we forget it.
If someone else needs to access certain credentials in your password manager, for example a wife that needs to verify bank information using your account, storing a copy of the password is a good idea here too. Since she is a trusted party, you can give her access to the safe in case of emergencies.
The system we have is good. If the safe is stolen or destroyed, you still have the master passphrase memorized to change the master passphrase and regenerate the 2FA security codes. The thief who stole the safe doesn't have your password manager's data, so the master passphrase is useless. However, our troubles aren't over yet:
- How do you store device credentials?
- How do you keep the password manager backed up?
Your password manager needs some device to access it: a phone, computer, tablet, laptop, or website. That device needs to be as secure as your password manager, otherwise accessing the password manager becomes a risk. This means using full disk encryption for the device, and a strong login passphrase. However, that means we have 2 more passwords to take care of that can't be stored in the password manager. We access them often, so we can't write them down and store them in the safe. Remembering two more passphrases complicates things and makes forgetting much more likely. Where do we store those passphrases?
One solution is removing the passwords altogether: with a hardware security key, you can authenticate both your disk encryption and your user login. If you keep a spare copy of the security key stored in the safe, you make sure you aren't locked out if you lose access to your main security key.
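On systemd-based distros, the security-key idea can be sketched with systemd-cryptenroll (assumptions on my part: LUKS2, systemd 248 or newer, and a FIDO2-capable key; the device path is hypothetical):

```
# Enroll a FIDO2 key as an additional LUKS2 unlock method; the existing
# passphrase stays enrolled as a fallback. /dev/nvme0n1p2 is hypothetical.
sudo systemd-cryptenroll --fido2-device=auto /dev/nvme0n1p2
# Then add fido2-device=auto to the options field in /etc/crypttab.
```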
Now to keep the password manager backed up, we use the 3-2-1 backup strategy. It states that there should be at least 3 copies of the data, stored on 2 different types of storage media, and one copy should be kept offsite, in a remote location (this can include cloud storage). Two or more different media should be used to eliminate data loss from a common cause (for example, optical discs may tolerate being underwater while LTO tapes may not, and SSDs cannot fail due to head crashes or damaged spindle motors since they have no moving parts, unlike hard drives). An offsite copy protects against fire, theft of physical media (such as tapes or discs) and natural disasters like floods and earthquakes. Physically protected hard drives are an alternative to an offsite copy, but they have limitations, like only resisting fire for a limited time, so an offsite copy remains the ideal choice.
So, our first copy will be on our secure device. It's the copy we access the most. The next copy could be an encrypted hard drive. The encryption passphrase could be stored in our safe. The last copy could be a cloud storage service. Easy, right? Well, more problems arise:
- Where do you store the credentials for the cloud storage service?
- Where do you store the LUKS backup file and password?
Storing the credentials for the cloud storage service isn't as simple as putting them in the safe. If we did that, then anyone with the safe could log in to the cloud storage service and decrypt the password manager backup using the passphrase also stored in the safe. If we protected the cloud storage service with our security key, a copy of that is still in the safe. Maybe we protect it with a 2FA code, and instead of storing the 2FA codes in the password manager, we store them on another device. That solves the problem for now, but problems remain, such as storing the credentials for that new device.
When using a security key to unlock a LUKS partition, you are given a backup file to store for emergencies. Plus, LUKS encrypted partitions still require you to set up a passphrase, so storing that is still an issue.
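As a rough sketch of what the security-key setup looks like on a systemd-based distro (assuming a FIDO2 key and a LUKS2 volume; /dev/sda2 is a placeholder device, and all of this runs as root):

```shell
# Enroll a FIDO2 security key as an unlock method for the LUKS2 volume.
systemd-cryptenroll --fido2-device=auto /dev/sda2

# Also enroll a recovery key, so losing the token doesn't lock you out.
systemd-cryptenroll --recovery-key /dev/sda2

# Back up the LUKS header; this file (plus the recovery key) is what
# you'd print or store in the safe for emergencies.
cryptsetup luksHeaderBackup /dev/sda2 --header-backup-file luks-header.img
```

The header backup and recovery key are exactly the "where do I store this?" artifacts the post is talking about, so this doesn't solve the chicken-and-egg problem; it just makes the pieces concrete.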
Conclusion
I'm going to stop here, because this post is getting long. I could keep going fixing problems and causing new ones, but the point is this: Security is a mess! I didn't even cover alternative ways to authenticate the password manager such as a key file, biometrics, etc. Trying to find "perfect" security is almost impossible, and that's why a threat model is important. If you set hard limits such as "No storing passwords digitally" or "No remembering any passwords" then you can build a security system that fits that threat model, but there's currently no security system that fits all threat models.
However, that doesn't let companies that just say "Store this in a secure place" off the hook either. It's a hand wavy response to security that just says "We don't know how to secure this part of our system, so it's your problem now". We need to have comprehensive security practices that aren't just "Use a password manager and 2FA", because that causes people to just store their master passphrase on a sticky note or a text file on the desktop.
The state of security is an absolute mess, and I'm sick of it. It seems that, right now, security, privacy, convenience, and safety (e.g. backups, other things that remove single points of failure) are all at odds with each other. This post mainly focused on how security, convenience, and safety are at odds, but I could write a whole post about how security and privacy are at odds.
Anyways, I've just outlined one possible security system you can have. If you have one that you think works well, I'd like to hear about it. I use a different security system than what I outline here, and I see problems with it.
Thanks for reading!
Using Mac Keyboard-layout on Linux?
Hey everyone,
As a longtime Mac user who got used to the typical Mac keyboard layout and who uses a Logitech MX Keys (Mac only), I was wondering if there is any chance of adopting the Mac layout 1:1 on one of my favourite Linux distros using KDE (desktop PC), without mapping every single key to match its Mac counterpart?
Is there any base tool I can use for this or any tool I can download to accomplish this?
Thanks in advance!
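One hedged starting point rather than a full answer: XKB already ships ready-made options that approximate Mac muscle memory, so on an X11/KDE session you can try them without remapping key by key (Wayland sessions usually need the equivalent setting in KDE's System Settings instead):

```shell
# Swap Alt and Win so the physical Cmd position behaves like Alt/Meta,
# which covers most of the Mac shortcut muscle memory.
setxkbmap -option altwin:swap_alt_win

# Browse the other ready-made altwin variants shipped with xkeyboard-config
# before resorting to per-key remapping.
grep 'altwin' /usr/share/X11/xkb/rules/base.lst
```

For deeper remapping (e.g. making Cmd+C copy everywhere), dedicated remappers like keyd or kmonad are the usual next step.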
I use vim, hence the difficulty with remapping the keys, and the Macs belong to my employers.
I would never buy a Mac and I only use Linux, so it doesn't make sense to grow accustomed to the Mac's quirks, especially since only every other employer provides Macs.
secureblue: Hardened Fedora Atomic and Fedora CoreOS images
Not many people have heard about secureblue, and I want to spread the word about it. secureblue provides hardened images for Fedora Atomic and CoreOS. It's an operating system "for those whose first priority is using linux, and second priority is security."
secureblue provides exploit mitigations and fixes for multiple security holes. This includes the addition of GrapheneOS's hardened_malloc, their own hardened Chromium-based browser called Trivalent, USBGuard to protect against USB peripheral attacks, and plenty more.
secureblue has definitely matured a lot since I first started using it. Since then, it has become something that could reasonably be used as a daily driver. secureblue recognizes the need for usability alongside security.
If you already have Fedora Atomic (e.g. Silverblue, Kinoite, Sericea, etc.) or CoreOS installed on your system, you can easily rebase to secureblue. The install instructions are really easy to follow, and I had no issues installing it on any of my devices.
I'd love more people to know about secureblue, because it is fantastic if you want a secure desktop OS!
(In honor of Holiday. You know who you are.)
Interesting thoughts about privacy, security, and all the things
I'm making this post to share some interesting less talked about things about privacy, security, and other related topics. This post has no direct goal, it's just an interesting thing to read. Anyways, here we go:
I made a post about secureblue, which is a Linux distro* (I'll talk about the technicality later) designed to be as secure as possible without compromising too much usability. I really like the developers, they're one of the nicest, most responsible developers I've seen. I make a lot of bug reports on a wide variety of projects, so they deserve the recognition.
Anyways, secureblue is a lesser known distro* with a growing community. It's a good contrast to the more well known alternative** Qubes OS, which is not very user friendly at all.
* Neither secureblue nor Qubes OS are "distros" in the classical sense. secureblue modifies and hardens various Fedora Atomic images. Qubes OS is not a distro either, as they state themselves; it's based on the Xen hypervisor and runs different Linux distros in isolated virtual machines.
** Qubes OS and secureblue aren't exactly comparable. They have different goals and deal with security in different ways, just as no threat model can be compared as "better" than any other one. This all is without mentioning secureblue can be run inside of Qubes OS, which is a whole other ballpark.
secureblue has the goal of being the most secure option "for those whose first priority is using Linux, and second priority is security." secureblue "does not claim to be the most secure option available on the desktop." (See here) Many people in my post were confused about that sentence and wondered what the most secure option for desktop is. Qubes OS is one option, however the secureblue team likely had a different option in mind when they wrote that sentence: Android.
secureblue quotes Madaidan's Insecurities in some places on their website. Madaidan's Insecurities holds the view that Linux is fundamentally insecure and praises Android as a much better option. It's a hard pill to swallow, but Madaidan's Insecurities does make valid criticisms of Linux.
However, Madaidan's Insecurities makes no mention of secureblue. Why is that? As it turns out, the site has not been updated in over 3 years. It is still a credible source on some topics, but some recommendations are outdated.
Many people are strictly anti-Google because of Google's extreme history of privacy violations, but in avoiding Google those people often end up harming their own security. The reality is that while Google is terrible with privacy, Google is fantastic with security. As such, projects like GrapheneOS use Google-made devices for their operating system. GrapheneOS explains their choice, and makes an important note that they would be willing to support other devices as long as those met their security standards. Currently only Google Pixels do.
For those unfamiliar, GrapheneOS is an open source privacy and security focused custom Android distribution. The Android Open Source Project (AOSP) is an open source project developed by Google. Like the Linux kernel, it provides an open source base for Android, which allows developers to make their own custom distributions of it. GrapheneOS is one such distribution, which "DeGoogles" the device, removing the invasive Google elements of the operating system.
Some Google elements, such as Google Play Services can be optionally installed onto the device in a non-privileged way (see here and here). People may be concerned that Google Pixels can still spy on them at a hardware level even with GrapheneOS installed, but that isn't the case.
With that introduction of secure Android out of the way, let's talk about desktop Android. Android has had a hidden option for Desktop Mode for years now. It's gotten much better since it was first introduced, and with the recent release of Android 15 QPR2, Android has been given a native terminal application that virtualizes Linux distros on the device. GrapheneOS is making vast improvements to the terminal app, and there are many improvements to come.
GrapheneOS will also try to support an upcoming Pixel Laptop from Google, which will run full Android on the desktop. All of these combined means that Android is one of, if not the, most secure option for desktop. Although less usable than some more matured desktop operating systems, it is becoming more and more integrated.
By the way, if you didn't know, Android is based on Linux. It uses the Linux kernel as a base, and builds on top of it. Calling Qubes OS a distro would be like calling Android and Chrome OS distros as well. Just an interesting fact.
So, if Android (or more specifically GrapheneOS) is the most secure option for desktop, what does that mean in the future? If the terminal app is able to virtualize Linux distros, secureblue could be run inside of GrapheneOS. GrapheneOS may start to become a better version of Qubes OS, in some respects, especially with the upcoming App Communication Scopes feature, which further sandboxes apps.
However, there is one bump in the road, which is the potential for Google to be broken up. If that happens, it might put GrapheneOS and a lot of security into a weird place. There might be consequences such as Pixels not being as secure or not supporting alternative Android distributions. Android may suffer some slowdowns or halts in development, possibly putting more work on custom Android distribution maintainers. However, some good may come from it as well. Android may become more open source and less Google invasive. It's going to be interesting to see what happens.
Speaking of Google being broken up, what will happen to Chrome? I largely don't care about what happens to Chrome, but instead what happens to Chromium. Like AOSP, Chromium is an open source browser base developed by Google. Many browsers are based on Chromium, including Brave Browser and Vanadium.
Vanadium is a hardened version of Chromium developed by GrapheneOS. Like what GrapheneOS does to Android, Vanadium removes invasive Google elements from the browser and adds some privacy and security fixes. Many users who run browser fingerprinting tests on Vanadium report it having a nearly unique fingerprint. Vanadium does actually include fingerprint protections (see here and here), but not enough users use it for it to be as noticeable as the Tor Browser. "Vanadium will appear the same as any other Vanadium on the same device model, and we don't support a lot of device models." (see here)
There's currently a battle in the browser space between a few different groups, so mentioning any browser is sure to get you involved in a slap fight. The fights usually arise between these groups:
- The group that is strictly anti-Google and uses Firefox-based browsers
- The security focused group that recognizes that Firefox is insecure and opts for privacy enhanced versions of Chromium
- The political group that only cares about the politics behind an organization rather than the code itself (examples: the Firefox Terms of Use update, Brave including a crypto wallet)
For that last one, I would like to mention that Firefox rewrote the terms after backlash, and users can disable the bloatware in Brave. Since Brave is open source, it is entirely possible for someone to fork it and remove the unwanted elements by default; Brave is another browser the GrapheneOS team recommends for security reasons.
Another interesting Chromium-based browser to look at is secureblue's Trivalent, which was inspired by Vanadium. It's a good option for users that use Linux instead of Android as a desktop.
Also, about crypto: why is there such negativity around it? Largely because of its use in crime, scams, and speculation. However, not all cryptocurrencies are automatically bad. The original purpose behind cryptocurrency was to solve a very interesting problem.
There are some cryptocurrencies with legitimate uses, such as Monero, which is designed to be completely anonymous. Whether or not you invest in it is your own business, and unrelated to the topics of this post. The Bitcoin project itself admits that Bitcoin is not anonymous, so there is a need for Monero if you want fully decentralized, anonymous digital transactions.
On the topic of fully decentralized and anonymous things, what about secure messaging apps? Most people, even GrapheneOS and CISA, are quick to recommend Signal as the gold standard. However, another messenger comes up in discussion (and my personal favorite), which is SimpleX Chat.
SimpleX Chat is recommended by GrapheneOS occasionally, as well as other credible places. This spreadsheet is my all time favorite one comparing different messengers, and SimpleX Chat is the only one that gets full marks. Signal is a close second, but it isn't decentralized and it requires a phone number.
Anyways, if you do use Signal on Android, be sure to check out Molly, which is a client (fork) of Signal for Android with lots of hardening and improvements. It is also available to install from Accrescent.
Accrescent is an open source app store for Android focused on privacy and security. It is one of the default app stores available to install directly on GrapheneOS. It plans to be an alternative to the Google Play Store, which means it will support installing proprietary apps. Accrescent is currently in early stages of development, so there are only a handful of apps on there, but once a few issues are fixed you will find that a lot of familiar apps will support it quickly.
Many people have high hopes for Accrescent, and for good reason. Other app stores like F-Droid are insecure, which pose risks such as supply chain attacks. Accrescent is hoped to be (and currently is) one of the most secure app stores for Android.
The only other secure app store recommended by GrapheneOS is the Google Play Store. However, using it can harm user privacy, as it is a Google service like any other. You also need an account to use it.
Users of GrapheneOS recommend making an anonymous Google account by creating it using fake information from a non-suspicious (i.e. not a VPN or Tor) IP address such as a coffee shop, and always use a VPN afterwards. A lot of people aren't satisfied with that response, since the account is still a unique identifier for your device. This leads to another slap fight about Aurora Store, which allows you to (less securely) install Play Store apps using a randomly given Google account.
The difference between the Play Store approach and the Aurora Store approach is that Aurora Store's approach is k-anonymous, rather than... "normal" anonymity. The preference largely comes down to threat models, but if you value security then Aurora Store is not a good option.
Another criticism of the Play Store is that it is proprietary. The view of security between open source software and proprietary software has shifted significantly. It used to be that people viewed open source software as less secure because the source code is openly available. While technically it's easier to craft an attack for a known exploit if the source code is available, that doesn't make the software itself any less secure.
The view was then shifted to open source software being more secure, because anyone can audit the code and spot vulnerabilities. Sometimes this can help, and many vulnerabilities have been spotted and fixed faster due to the software being open source, but it isn't always the case. Rarely do you see general people looking over every line of code for vulnerabilities.
The reality is that just because something is open source doesn't mean it is automatically more or less secure than if it were proprietary. Being open source provides integrity (since the developers make it as easy as possible to spot misconduct) and full accountability for the developers when something goes wrong. Being open source is obviously better than being proprietary, which is why many projects choose it, but software doesn't have to be open source to be secure.
Plus, the workings of proprietary code can technically be viewed, since some code can be decompiled, reverse engineered, or simply read as assembly instructions, but all of those are difficult, time consuming, and might get you sued, so it's rare to see it happen.
I'm not advocating for the use of proprietary software, but I am advocating for less hate regarding proprietary software. Among other things, proprietary code has its place in security-relevant components like firmware and drivers, which is why projects like linux-libre and Libreboot are worse for security than their counterparts (see coreboot).
Those projects still have uses, especially if you value software freedom over security, but for security alone they aren't as recommended.
Disclaimer before this next section: I don't know the difference in terminology between "Atomic", "Immutable", and "Rolling Release", so forgive me for that.
Also, on the topic of software freedom, stop using Debian. Debian is outdated and insecure, and I would argue less stable too. Having used a distro with an Atomic release cycle, I have experienced far fewer issues than when I used Debian. Not to mention, if you mess anything up on an Atomic distro, you can just roll back to the previous boot like nothing happened, and still keep all your data. That saved me when I almost bricked my computer modifying /etc/fstab by hand.
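The rollback I'm describing works roughly like this on rpm-ostree-based Atomic distros (a sketch of the generic commands, not specific to any one image; they need root):

```shell
# Show the current and previous deployments kept on disk.
rpm-ostree status

# Mark the previous (known-good) deployment as the default for next boot.
rpm-ostree rollback

# Reboot into it; /etc changes are merged, user data is untouched.
systemctl reboot
```

Because the old deployment is still present on disk, a botched /etc/fstab edit or a bad update is a one-command recovery instead of a rescue-USB session.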
Since fixes are pushed out every day and all software is kept as up to date as possible, I'd argue Atomic distros give more stability than an outdated "tried and tested" system. This is an opinion rather than something factually measured.
Once I realized the stable version of Debian uses Linux kernel 6.1 (which is 3 years old and has had actively exploited vulnerabilities) while the latest stable version of the kernel is 6.13, I switched pretty quickly, for that reason among others.
Now, many old kernel versions are still maintained, and the latest stable version of Android uses kernels 6.1 and 6.6 (which are still maintained), but it's still not great to use older kernel versions regardless. And the kernel isn't the only insecure thing about Debian.
I really have nothing more to say. I know I touched on a lot of extremely controversial topics, but I'm sick of privacy being at odds with security, as well as other groups being at odds with each other. This post is sort of a collection of a lot of interesting privacy and security knowledge I've accrued throughout my life, and I wanted to share my perspective. I don't expect everybody to agree with me, but I'm sharing this in case it ever becomes useful to someone else.
Thanks for taking the time to read this whole thing, if you did. I spent hours writing it, so I'm sure it's gotten very long by now.
Happy Pi Day everyone!
Comprehensive guide to hardening RHEL clones?
Is there some sort of comprehensive guide on hardening RHEL clones like Alma and Rocky?
I have read Madaidan's blog, and I plan to go through CIS policies, Alma and Rocky documentation and other general stuff like KSPP, musl, LibreSSL, hardened_malloc etc.
But I feel like this is not enough and I will likely face problems that I cannot solve. Instead of trying to reinvent the wheel by myself, I thought I'd ask if anyone has done this before so I can use their guide as a baseline. Maybe there's a community guide on hardening either of these two? I'd contribute to its maintenance if there is one.
Thanks.
You raise a valid point. In which case, I want to try and prevent malicious privilege escalation by a process on this system. I know that's a broad topic and depends on the application being run, but most of the tweaks I've listed work towards that to an extent.
To be precise, I'm asking how to harden the upcoming AlmaLinux based Dom0 by the XCP-NG project. I want my system to be difficult to work with even if someone breaks into it (unlikely because I trust Xen as a hypervisor platform but still).
I admit I was a bit surprised by the question since I've never consciously thought about a reason to harden my OS. I always just want to do it and wonder why OSes aren't hardened more by default.
Privilege escalations always have to be granted by an upper-privilege process to a lower-privilege process.
There is one general way this happens.
Ex: root opens up a line of communication between it and a user, the user sends input to root, root mishandles it, it causes undesired behavior within the root process and can lead to bad things happening.
All privilege escalation involves two different privilege levels interacting: crossing the security boundary. If you wish to limit this, you need to find the parts of the system that cross that boundary, like sudo[1], and remove them from your system.
[1]: sudo is an SUID binary. That means that when you run it, it runs as root. This is a problem, because you as an unprivileged user have some influence over code that executes within the program (code running as root).
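To make the footnote concrete: the SUID property is just a mode bit on the file, and you can list every binary that carries it. A minimal sketch using a scratch directory so it runs unprivileged (on a real system you'd search /usr/bin instead):

```shell
# Create a scratch file and set the SUID bit on it (mode 4xxx),
# the same bit that makes sudo execute as its owner (root).
d=$(mktemp -d)
touch "$d/fakesudo"
chmod 4755 "$d/fakesudo"

# Find files carrying the SUID bit; each one is a boundary-crossing
# you may want to audit or remove.
find "$d" -perm -4000 -type f
```

Running `find /usr/bin -perm -4000 -type f` on a stock distro shows how many such boundary crossings exist by default: sudo, su, passwd, mount, and so on.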
Cider won't start in Linux Mint
Hi,
I'm having a problem with Cider (app for Apple Music). It won't start anymore. The Cider logo appears, but then I get an error "cider is not responding, reload the app?". Obviously I've tried to start it again but it still isn't working.
Can anybody help me? Thanks!
Is there still any hope for static binaries (games) that "just work" across distros?
Has anybody been able to build a statically linked binary that shows a Vulkan surface? I've put some context around this problem in the video. I understand that the Vulkan driver has to be loaded dynamically, so it's more a question of whether a statically built app can reliably load and talk to it. I think it should be possible, but I haven't actually seen anyone make it work. I'm aware of "static-window9" by Andrew Kelley, but sadly it doesn't work anymore (at least on my Gentoo machine T_T).
(I'm also aware of AppImages but I don't think they're the "proper" solution to this problem - more like a temporary bandaid - better than Docker but still far from perfect)
Static Linking - Battle Lost (AUTOMAT DEVLOG 11, YouTube)
A battle was lost, but the fight for a better gaming UX will keep going as long as we keep hope. We're writing AUTOMAT, a game that plays other games...
Sounds like flatpaks/appimages with extra steps.
I'm fairly sure the complexity of flatpak/appimage solutions is far greater than that of statically linking a binary (at least on non-glibc systems). It's often a single flag (e.g. -static) that builds the shared libraries into the binary, not a whole container and namespace.
Sounds like flatpaks/appimages with extra steps
Includes all dependencies? ✔️
A single file? ✔️
Independent of host libraries? ✔️
Limited learning curve? ✔️
Not sure how appimages handle it internally, but with flatpaks you can even be storage efficient with layers, whereas 100s of static binaries could contain an awful lot of duplicates.
Why I Stopped Using Arch Linux...
every single "What distro should I try?" thread: "Mint" "I recommend Mint" "Why use x when you could use Mint?" "MINT!"... /rant
and yea, though there are other worthy transitional distros, I shall not see them, for the votes have taught me so. And the people said: Mint.
Help with sed commands
Hi all! I have always only used sed with s///, because I've never been able to figure out how to properly make use of its full capabilities. Right now, I'm trying to filter the output of df -h --output=avail,source to only get the available space from /dev/dm-2 (let's ignore that I just realized df accepts a device as a parameter, which clearly solves my problem).
This is the command I'm using, which works:
df -h --output=avail,source \
| grep /dev/dm-2 \
| sed -E 's/^[[:blank:]]*([0-9]+(G|M|K)).*$/\1/'
However, it makes use of grep, and I'd like to get rid of it. So I've tried with a combination of t, T, //d and some other stuff, but honestly the output I get makes no sense to me, and I can't figure out what I should do instead. In short, my question is: given the following output
$ df -h --output=avail,source
Avail Filesystem
87G /dev/dm-2
1.6G tmpfs
61K efivarfs
10M dev
...
How do I only get 87G using only sed as a filter?
EDIT: Nevermind, I've figured it out...
$ df -h --output=avail,source \
| sed -E 's/^[[:blank:]]*([0-9]+(G|M|K))[[:blank:]]+(\/dev\/dm-2).*$/\1/; t; /.*/d'
85G
awk? Printing out a specific column is basically the only thing I actually know how to do with it: df -h --output=avail,source | awk '/dm-2/ {print $1}'
Are you opposed to using awk?
Not at all, I'm just not familiar with it so I find it confusing.
Although, looking at your command, I think I understand what it means.
Use sed's -n option and add the p modifier to the s/// command to print out lines where substitution has occurred: sed -n 's/your-regexp/replacement/p'
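Putting that together with the thread's df output (reproduced here with printf so the sketch is self-contained and doesn't depend on your mounts):

```shell
# -n suppresses automatic printing; the p flag on s/// prints only the
# lines where the substitution actually matched, so no grep and no //d.
printf '%s\n' 'Avail Filesystem' '  87G /dev/dm-2' ' 1.6G tmpfs' ' 10M dev' \
| sed -nE 's|^[[:blank:]]*([0-9.]+[GMK])[[:blank:]]+/dev/dm-2.*$|\1|p'
# -> 87G
```

Using | as the s/// delimiter avoids escaping the slashes in /dev/dm-2, which is what made the original attempt hard to read.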
Zross?
X-Cross?
X-ross?
Oh, it's just pronounced Cross. I'm always unsure how people who make naming decisions like these get the jobs they do.