
Disposable Root Servers: https://www.thc.org/segfault/

Segfault offers free unlimited Root Servers. A new server (inside a Virtual Machine) is created for every SSH connection.

    - Dedicated Root Server for every user.
    - Pre-installed tools on Kali-Linux.
    - Outgoing traffic is routed through NordVPN/CryptoStorm/Mullvad.
    - Reverse TCP/UDP port on a public IP.
    - Transparent TOR to connect to .onion addresses.
    - Log in via .onion, .gsocket or direct ssh (port 22 or 443).
    - Encrypted DNS traffic (DNS over HTTPS).
    - Pre-configured .onion web server. Just put your files in /onion.
    - Encrypted storage in /sec and /home with your password.
    - Encrypted storage is only accessible while you are logged in. Keys are wiped on log out.
    - Only the user can decrypt the data. We do not have the key.
    - No Logs.
Different 'tilde' services:

    - https://tilde.town/
    - https://tilde.club/
    - https://tilde.fun/
    - https://ctrl-c.club/
    - https://tilde.green/
    - https://tilde.guru/
OG shell access:

    - https://blinkenshell.org/
    - https://freeshell.de/
    - https://sdf.org/

I suspect this service will be abused by all kinds of people and will have to shut down.

It's been up for years

or quickly subsidized by three letter agencies

Yeah, that is what I was thinking: "Ah how cute, it's the ops team from a state", lol. But probably not - I didn't look into it and I'm not that interested, but my guess is there's an existing infosec consultancy behind it that does sometimes work for those kinds of places, or banks, etc.

>It is where I attach a debugger, it is where I install iotop and use it for the first time. It is where I cat out mysterious /proc and /sys values to discover exotic things about cgroups I only learned about 5 minutes prior in obscure system documentation.

It is. SSH is indeed the tool for that, but only because until recently we did not have better tools and interfaces.

Once you try newer tools, you don't want to go back.

Here's an example from a fairly recent debug session of mine:

    - Network is really slow on the home server, no idea why
    - Try to just reboot it, no changes
    - Run kernel perf, check the flame graph
    - Kernel spends A LOT of time in nf_* (netfilter functions, iptables)
    - Check iptables rules
    - sshguard has banned 13000 IP addresses in its table
    - Each network packet travels through all the rules
    - Fix: clean the rules/skip the table for established connections/add timeouts
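
The fix in the last step can be sketched as an nftables ruleset fragment (an assumption on my part: the set name `sshguard`, the timeout, and the table layout are illustrative, not what sshguard actually generates):

```
table inet filter {
    set sshguard {
        type ipv4_addr
        flags timeout
        timeout 2h        # stale bans expire instead of piling up to 13,000
    }
    chain input {
        type filter hook input priority filter; policy accept;
        ct state established,related accept   # fast path: skip the ban set
        ip saddr @sshguard drop
    }
}
```

Loadable with `nft -f`; the equivalent iptables shape is an early `-m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT` rule plus an ipset created with timeouts.
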
You don't need debugging facilities for many issues. You need observability and tracing.

Instead of debugging the issue for at least tens of minutes, I just used an observability tool, which showed me the culprit in 2 minutes.


See, I would not reboot the server before figuring out what is happening. You lose a lot of info by doing that, and the worst outcome is that the problem merely goes away for a little bit.

To be fair, turning it off and on again is unreasonably effective.

I recently diagnosed and fixed an issue with Veeam backups that suddenly stopped working partway through the usual window and kept failing from that point on. This particular setup has three sites (prod, my home and DR), and five backup proxies. Anyway, I read logs and Googled somewhat. I rebooted the backup server - no joy, even though it looked like the issue was there. I restarted the proxies and things started working again.

The error was basically: there are no available proxies, even though they were all available (but not working but not giving off "not working" vibes).

I could bother with trying to look for what went wrong but life is too short. This is the first time that pattern has happened to me (I'll note it down mentally and it was logged in our incident log).

So, OK, I'll agree that a reboot should not generally be the first option. Whilst sciencing it or nerding harder is the purist approach, often a cheeky reboot gets the job done. However, do be aware that a Windows box will often decide to install updates if you are not careful 8)


No, you didn’t diagnose and fix an issue.

You just temporarily mitigated it.


Sometimes that is enough - especially for home machines etc.

I’ve got no problem with somebody choosing to mitigate something instead of fixing it. But it’s just incorrect to apply a blind mitigation and declare that you’ve diagnosed the problem.

what's the ROI on that?

-- leadership


Turning it off and on again is risky. I recently upgraded a robot in Australia, had problems with systemd, so I turned it off. And had to wait a few weeks until it could be turned on again, because tailscaled was not set up persistently, the routing was not set up properly (over a phone), the machine had some problems, ...

High risk, low reward. But of course it's the ultimate test of whether it's properly set up.

But on the other hand, with my tiny hard real-time embedded controllers, a power cycle is the best option. No persistent state, fast power up, reboot in milliseconds. Every little SW error causes a reboot, no problem at all.



My job as a DevOps engineer is to ensure customer uptime. If rebooting is the fastest, we do that. Figuring out the why is the primary developers’ jobs.

This is also a good reason to log everything all the time in a human readable way. You can get services up and then triage at your own pace after.

My job may be different from others' as I work at an ITSP and we serve business phone lines. When business phones do not work, it is immediately clear to our customers. We have to get them back up, not just for their business but for their ability to dial 911.


> This is also a good reason to log everything all the time in a human readable way. You can get services up and then triage at your own pace after.

Unless, hypothetically, the logging velocity tickles kernel bugs and crashes the system, but only when the daemon is started from cron and not elsewhere. Hypothetically, of course.

Or when the system stops working two weeks after launch because "logging everything" has filled up the disk, and took two weeks to do so. It also means important log messages (perhaps that the other end is down) might be buried in 200 lines of log noise and backtrace spam per transaction, which in turn might delay debugging and fixing, or at least isolating at which end of the tube the problem resides.
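
One common guard against that failure mode is rotation with a size cap, so unbounded logging cannot quietly fill the disk. A sketch of a logrotate fragment (the path, counts and sizes are illustrative):

```
# /etc/logrotate.d/myapp (illustrative)
/var/log/myapp/*.log {
    daily
    rotate 14         # keep at most 14 generations
    maxsize 100M      # rotate early if a file grows past 100 MB
    compress
    missingok
    notifempty
}
```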


Most failstates aren't worth preserving in an SMB environment. In larger environments, or ones equipped for it, a snapshot can be taken before rebooting, should the issue repeat.

Once is chance, twice is coincidence, three times makes a pattern.


Alternatively, if it doesn't happen again it's not worth fixing, if it does happen again then you can investigate it when it happens again.

I've debugged so many issues in my life that sometimes I'd prefer things to just work, and if reboot helps to at least postpone the problem, I'd choose that :D

Seriously, and sometimes it's just not worth investigating. Which means it's never going to get fixed, and I'd rather go home than create another ticket that'll just go stale and age out.

I fail to understand how your approach is different from your parent's.

perf is a shell tool. iptables is a shell tool. sshguard is a log reader and ultimately you will use the CLI to take action.

If you are advocating newer tools, look into nft - iptables is sooo last decade 8) I've used the lot: ipfw, ipchains, iptables and nftables. You might also try fail2ban - it is still worthwhile even in the age of the massively distributed botnet, and covers more than just ssh.

I also recommend a VPN and not exposing ssh to the wild.

Finally, 13,000 addresses in an ipset is nothing particularly special these days. I hope sshguard is creating a properly optimised ipset table and that you're running appropriate hardware.

My home router is a pfSense jobbie running on a rather elderly APU4 based box and it has over 200,000 IPs in its pfBlocker-NG IP block tables and about 150,000 records in its DNS tables.


>perf is a shell tool. iptables is a shell tool. sshguard is a log reader and ultimately you will use the CLI to take action.

Well yes, and to be honest in this case I did all of that over SSH: run `perf`, generate the flame graph, copy the .svg to the PC over SFTP, open it in the file viewer.

What I really want is a web interface which just shows me EVERYTHING it knows about the system in the form of charts and graphs, so I can skim through it and check visually that everything is all right, without using the shell and each individual command.

Take a look at Netflix presentation, especially on their web interface screenshots: https://archives.kernel-recipes.org/wp-content/uploads/2025/...

>look into nft - iptables is sooo last decade

It doesn't matter in this context: iptables is using new netfilter (I'm not using iptables-legacy), and this exact scenario is 100% possible with native netfilter nft.

>Finally, 13,000 address in an ipset is nothing particularly special these days

Oh, the other day I had just 70 `iptables -m set --match-set` rules, and do you know how inefficient the source/destination address hashing algorithm for the set match apparently is?! I debugged that with perf as well, but I wish I'd just had it as a dashboard picture from the start.
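
A hedged sketch of one way around that shape of problem: collapse many separate `--match-set` rules into a single nft verdict map, so each packet pays for one hash lookup instead of ~70 (the table/map names and the prefixes below are invented):

```
table inet filter {
    map blockmap {
        type ipv4_addr : verdict
        flags interval
        elements = { 192.0.2.0/24 : drop,
                     198.51.100.0/24 : drop }
    }
    chain input {
        type filter hook input priority filter; policy accept;
        ip saddr vmap @blockmap   # one lookup replaces a linear rule walk
    }
}
```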

I'm talking about ~4Gbit/s sudden limitation on a 10Gbit link.


"What I really wanted is a web interface which will just show me EVERYTHING it knows about the system in a form of charts, graphs, so I can just skim through it and check if everything allright visually, without using the shell and each individual command."

Yes, we all want that. I've been running monitoring systems for over 30 years and it is quite a tricky thing to get right. 1.3.6.1.4.1.33230 is my company's enterprise number, which I registered a while back.

The thing is that even though we are now in 2026, monitoring is still a hard problem. There are, however, lots of tools - way more than we had in the day but just like a saw can rip your finger off instead of cutting a piece of wood, well I'm sure you can fill in the blanks.

Back in the day we had a thing called Ethereal which was OK and nearly got buried. However you needed some impressive hardware to use it. Wireshark is a modern marvel and we all have decent hardware. SNMP is still relevant too.

Although we have stonking hardware these days, you do also have to be aware of the effects of "watching". All those stats have to be gathered and stashed somewhere and be analysed etc. That requires some effort from the system that you are trying to watch. That's why things like snmp and RRD were invented.

Anyway, it is 2026 and IT is still properly hard (as it damn well should be)!


> What I really wanted is a web interface which will just show me EVERYTHING it knows about the system in a form of charts, graphs, so I can just skim through it and check if everything allright visually, without using the shell and each individual command.

For this reason, I've created Lightkeeper: https://github.com/kalaksi/lightkeeper to simplify repetitive tasks and provide an efficient view for monitoring. Also has graphs as a recent addition, but screenshots don't show it. You can also drop to a terminal with a hotkey any time.

Ironically, it works over SSH without any additional daemons.


>Oh, the other day I had just 70 `iptables -m set --match-set` rules, and did you know how apparently inefficient source/destination address hashing algorithm for the set match is?! It was debugged with perf as well!

>I'm talking about ~4Gbit/s sudden limitation on a 10Gbit link.

I think you need to look into things if 70 IPs in a table are causing issues such that a 10Gb link ends up at ~4Gb/s. I presume that if you remove the ipset, the full 10Gb/s is restored?

Testing throughput and latency is also quite a challenge - how do you do it?


Your example is a shell debugging session. You ran perf, checked iptables, inspected sshguard - all via SSH (or locally). The "observability tool" here is shell access to system utilities.

This proves the parent's point: when the unknown happens, you need a shell.


How did you use tracing to check the current state of a machine’s iptables rules?

In this case I used the `perf` utility, but only because the server does not have a proper observability tool.

Take a look at this Netflix presentation, especially on the screenshots of their web interface tool: https://archives.kernel-recipes.org/wp-content/uploads/2025/...


That is a command line tool run over ssh. If you have invented a new way to run command line tools, that’s great (and very possible, writing a service that can fork+exec and map stdio), but it is the equivalent to using ssh. You cannot run commands using traces.

With that mindset anything is equivalent to ssh. The command line is not the pinnacle of user interfaces and giving admins full control of the machine isn't the pinnacle of security either.

We need to accept that UNIX did not get things right decades ago and be willing to evolve UX and security to a better place.


Happy to try an alternative. Traces I have tried, and it is not an alternative.

That only works if the people who built the observability tool have thought of everything. They haven't, of course; no one can.

It's great that you were able to solve this problem with your observability tools. But nothing will ever be as comprehensive as what you can do with shell access.

I don't get what the big deal is here. Just... use shell access when you need it. If you have other things in place that let you easily debug and fix some classes of issues, great. But some things might be easier to fix with shell access, and you could very easily run into something you can't figure out without ssh.

Completely disabling shell access is just making things harder for you. You don't get brownie points or magical benefits from denying yourself that.


> That only works if the people who built the observability tool have thought of everything. They haven't, of course; no one can.

but the tool draws data from forums of people who have had the problem I’m having before.


>What I need are tools for when the unknown happens.

There are tools which show what happens per process/thread and inside the kernel. Profiling and tracing.

Check Yandex's Perforator, Google Perfetto. Netflix also has one, forgot the name.



I spent about 2 years improving the printing and scanning stack of Linux: CUPS, SANE, AirSane, as well as some legacy drivers, and also x86 proprietary driver emulation on ARM with Box86.

Even though the "modern" printing stack in Linux is 20+ years old, there's still such an unbelievable amount of basic bugs and low-hanging-fruit optimizations that it's kinda sad.

Not to mention that it still maintains ALL its legacy compatibility, supporting ≈5 different driver architectures and 4 user-selectable rasterizers (each with its own bugs and quirks).

The whole printing stack is supported by 4 people, 2 of whom have been doing that since the inception of CUPS in 1999. Scanning is maintained by a single person.

Ubuntu 26.04 LTS is expected to be the last version with CUPS v2. CUPS v3 drops the current printer driver architecture and introduces proper modern driverless printing, with a wrapper for older drivers. Many open-source drivers already use this wrapper, but expect huge disruption for users, as none of the proprietary drivers will work out of the box anymore.

Do you care about printing? Want to improve printing & scanning stack? Contact OpenPrinting! https://github.com/OpenPrinting/


This is awesome, thank you for doing this work; it’s not glamorous but it’s a key feature to making computers productive.

I know it’s not a popular opinion here, but I think that Windows has two killer features that are always overlooked: the standard print dialog (and all the underlying plumbing), and the standard file dialog (at least until Windows 8).

The ability to print and to interact with files, that just works, without having to retrain people every time a new OS comes out, and without having to reprogram your apps or write your own drivers and/or UI, is incredibly important.

Yes, I know Linux and Mac have the same, but IMO Windows was light years ahead for decades, and is still more consistent and easy to use.


Mac has had print to PDF from the start; I'm not sure if even the latest Windows comes with that OOB. I'm sure Linux is the same (as in, the same as Mac).

Windows has had print to PDF out of the box since Windows 10 (approx. mid 2015) [1]

[1]: https://pdfa.org/microsoft-adds-print-to-pdf-native-to-windo...


Fair enough, for me that is very late to the game.

Was going to say the same thing. I'm not a big fan of Windows, but the printing Just Works. Having read the OP's explanation of why CUPS is the way it is, yeah, now it makes sense.

Maybe CUPS needs a Heartbleed-scale problem to motivate more support.


Windows has had plenty of vulnerabilities in the print spooler. Not surprising really, as it has been developed over decades.

Great initiative. I wonder, how likely is it for a complete beginner to break their own printer or scanner by making a mistake in a driver implementation? Or is it possible to work on hardware support without having the physical device? I assume it is impossible to test each and every printer and scanner, so there are probably some clever tricks there, right?

I work mostly with the old microcontroller-based cheap consumer ("GDI") USB models circa 2000-2010; these are hardly possible to brick with software, as some of them don't even have persistent firmware and expect the PC to upload it on each power-on.

The hardware safety mechanisms are usually robust: USB communication is handled by the "Formatter Board", while all the mechanical control lives in the "Engine Controller".

Newer Linux-based models have filesystems, software, and vulnerabilities; printer hacking at Pwn2Own is an every-year occurrence. These can be permanently bricked by software in the usual sense, and would require a firmware reflash using the bootloader or external means.

>Or is it possible to work on hardware support without having a physical device?

Absolutely, but for me this is very inconvenient, like debugging over the phone.

Sometimes the bug is as low-level as in the USB stack: https://lore.kernel.org/linux-usb/3fe845b9-1328-4b40-8b02-61...

>I assume it is impossible to test each and every one printer and scanner, so there is probably some clever tricks there, right?

Not much, unfortunately. There's ongoing work on modern (driverless) printer behavior emulation, but it is under heavy development and not ready yet: https://github.com/OpenPrinting/go-mfp

Nothing for the older printers and scanners which require their own driver, as far as I'm aware.


You're not actually telling a modern printer "step this motor so many times to move some assembly so far this way", or "turn on so much current in such and such circuit", or whatever. The driver doesn't have enough responsibility over such things to be able to break anything.

A printer driver is something like a protocol converter. Roughly speaking, it bridges the printing APIs of some printer framework or service on the host to the right printer language (which may have vendor-specific nuances even if it is nominally a standard).
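
To make the "protocol converter" framing concrete, here is a toy sketch: a CUPS-style filter is just a pipe stage that reads already-rendered page data on stdin and emits a wrapped, "printer-ready" job on stdout. Everything here is invented for illustration (`wrap_job`, the `TOYPCL` language, the payload) except the PJL universal exit sequence, which is a real convention:

```shell
# Toy filter sketch: stdin = rendered page data, stdout = wrapped job.
wrap_job() {
    printf '\033%%-12345X@PJL ENTER LANGUAGE=TOYPCL\n'  # job header (toy)
    cat                                                 # pass page data through
    printf '\033%%-12345X'                              # job footer
}

printf 'page-data' | wrap_job
```

A real filter would translate CUPS Raster into PCL, ESC/P and the like; nothing at this layer drives motors or writes firmware, which is why driver mistakes rarely damage hardware.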


Any relation to this project? https://www.opentools.studio

Nah, I got into printing because nobody made a commercially available print server, and I ended up making my own, getting involved in the whole stack in the process.

I wish Openprinter luck; it was announced at the end of September but there's nothing out there yet, not even the crowdfunding campaign.


Ah dang, I was hoping. I'm super interested in that and love that you're modernizing something BigTech doesn't seem to care about.

Thank you. I have thrown printers out the literal window.

Ah, so you're Russian then. If you were American you'd have shot them.

> If you were American you'd have shot them.

Archaeological evidence strongly suggests earlier Americans preferred hands, feet, and occasional repurposed sporting equipment.

https://m.youtube.com/watch?v=pD2xBXm4y70


I was wondering whether I should post a link to that, but figured the shooting fitted better. There's actually a place (or used to be) somewhere near Silicon Valley where you can take your least favourite piece of IT gear and blast it with rented firearms, pretty sure it was in operation at the time Office Space was made.

> There's actually a place (or used to be) somewhere near Silicon Valley where you can take your least favourite piece of IT gear and blast it with rented firearms

The entire state of Nevada?


> Thank you. I have thrown printers out the literal window.

I have literally thrown one of those "winmodems" [1] out of the window back in the day. I then went outside and drove over it with my car. I then put it in a bench vise until its PCB shattered to pieces. Utter destruction, much to the amusement of my brothers.

These were the days.

And big thanks to GP for his work on CUPS / Linux printing.

[1] https://en.wikipedia.org/wiki/Softmodem


I miss when hatred for technology was tangible.

He's our hero

Is that your article?

>I have a collection of over 10,000 PDF files

Could you upload it somewhere, please? I need a collection to test the CUPS pdftopdf converter for printing, as well as rasterizers such as ghostscript, poppler, mupdf and others.



>right now I have 4K on 24", at 200% scaling

I have Dell P2415Q, from 2015. There are, like, 4 other (legacy) models of 24" 4K out there, and that's it. I've no idea why they don't manufacture them.


Yes it's sad :'( I have the much cheaper LG 24UD58. Also no longer being manufactured.

I also have one bought used. It’s the only way to have 4k and 200% scaling on Linux without everything being too big or too small. Size and ppi are perfect but sadly other aspects are becoming really dated (bad colors and contrast, high latency, low refresh rate etc).

Huh. Maybe your particular sample is degrading?

I also have one, and it's holding up pretty well. A month or so ago I broke out my colorimeter and it had almost 100% sRGB at around 120 cd/m2. I don't recall the delta E, but it was very low.

While I didn't measure the backlight, it does seem to not go as bright as before, judging by the levels I set in the OSD. I never went above 70% or so when the sun was shining in the room (not directly on the screen, though), so it didn't have any effect on me.

I understand there are two versions; I have the second one. But I don't think there's a difference in the panel itself, I think the change was related to HDMI support.

I can't comment on the latency; the only games I played on it were Civilization and Anno Something. Never had a problem with this.


I have been using P2415Qs for over 10 years by now. Replaced some, bought second hand, had to ship to Dell at one point because of the wake issues (pointless: they never really fixed them), so I know the drill. There are actually 4 versions.

The last new one I bought in 2018 I actually paid the same price for as my first one in 2015, so it is one of those few computer accessories that significantly increased in price over its lifetime rather than decreased.

If you cannot see the P2415Q degrading and/or being generally crap in any metric (EXCEPT DPI) when compared to even the non-IPS black Dell monitors from this decade, you are simply blind. They are early HiDPI-revival-era panels, and it shows.

Some of the newer IPS black panels are so good that it is tempting to just take the DPI hit and go 27''... albeit with care as it seems Dell has decided this last year to put some filter that further increases blurriness.


I don't pretend that I have the best vision out there, though I don't think I'm completely blind since I don't run into things. But I actually measured this display, and it's within specs. So maybe both I and my colorimeter are blind? Sure, it's not absolutely impossible, but how likely is it?

I actually have a newer "ips black" dell, an ultrasharp 3223qe and yes, it's much better.

But what I'm saying is that the old one is still good. I never pretended it was as good as current models; that's moving the goalposts. The initial comment was about the display degrading, so I compared it to itself when new (not even to other similar models from that era!). Mine only seems to have become somewhat dimmer, but not enough to matter in my day-to-day use, since it's still brighter than I need.


I have the same Dell (since 2016) and love it. But eventually I transitioned last year to a 27" 4K monitor. Still almost as sharp (KDE at 175% works fine for me).

Migrated to 5k 27" 200% Shaaarrp!

Downsizing from a 27" 5k to a 24" 4k, could not find anything besides a new company called JAPANNEXT (they are French)

Yeah, I've tried their 24" 4K monitor; it was okay, but not great, so I returned it. 24" is the max size I can tolerate with my short-sightedness, as I avoid using glasses at the monitor.

Good thing I have three of them. I’m set for life.

What's encrypted printing? How does it work?

The grsecurity project has fixed many security bugs but did not contribute them back, as they're profiting from selling the patchset.

It's not uncommon for the bugs they found to be rediscovered 6-7 years later.

https://xcancel.com/spendergrsec


This implies (or states, hard to say) that they don't upstream specifically in order to profit. That is nonsense.

1. Tons of bugs are reported upstream by grsecurity historically.

2. Tons of critical security mitigations in the kernel were outright invented by that team. ASLR, SMAP, SMEP, NX, etc.

3. They were completely FOSS until very recently.

4. They have always maintained that they are entirely willing to upstream patches but that it's a lot of work and would require funding. Upstream has always been extremely hostile towards attempts to take small pieces of Grsecurity and upstream them.


But the patchset should use the same license as the original code, shouldn't it?


> as they're profiting from selling the patchset

Profiting from selling their patchset is not the whole story, though. grsec was public and free for a long time and there were many effects at play preventing the kernel from adopting it.

