Please be aware that when you use tailscale funnel you announce to the whole world that your service exists (through certificate transparency), and you will get scanned immediately. If you don't believe me, just put up a simple http server and watch the scanning requests come in within seconds of running `tailscale funnel`.
Do not expose anything without authentication.
And absolutely do not expose a folder with something like `python -m http.server -b 0.0.0.0 8080` if it has a `.git` directory in it; someone will help themselves to it immediately.
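If you must expose something quickly, put even a crude gate in front of it. A minimal sketch in TypeScript (Node), with a made-up user/password and directory; this is illustrative, not a hardened server:

```typescript
// Sketch: static file server behind HTTP basic auth.
// USER, PASS and ROOT are placeholders; change before use.
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";
import { join, normalize } from "node:path";

const USER = "dev";
const PASS = "change-me";
const ROOT = "./public";

createServer(async (req, res) => {
  const auth = req.headers.authorization ?? "";
  const expected = "Basic " + Buffer.from(`${USER}:${PASS}`).toString("base64");
  if (auth !== expected) {
    res.writeHead(401, { "WWW-Authenticate": 'Basic realm="dev"' });
    return res.end("auth required");
  }
  // Normalize the path so trivial ../ traversal can't escape ROOT.
  const path = normalize(join(ROOT, req.url === "/" ? "index.html" : req.url!));
  if (!path.startsWith(normalize(ROOT))) {
    res.writeHead(403);
    return res.end();
  }
  try {
    res.end(await readFile(path)); // serve the file as-is
  } catch {
    res.writeHead(404);
    res.end("not found");
  }
}).listen(8080);
```

Funnel terminates TLS for you, so basic auth over it is at least not sent in the clear.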
If you are aware of this, funnel works fine and is not insecure.
Tailscale is IMHO failing to educate people about this danger. They do mention it in the docs, but I think it should be a big red warning when you start it, because people clearly do not realise this.
I took a quick look a while ago and, watching just part of the CT firehose, I found 35 exposed .git folders in 30 minutes.
No idea if there was anything sensitive; I just did a HEAD check against `.git/index`, if I recall correctly.
https://infosec.exchange/@gnyman/115571998182819369
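Roughly something like this, from memory (the hostname list here is a placeholder; the real ones came from the CT stream):

```typescript
// Sketch: HEAD-check hostnames (e.g. pulled from CT logs) for an
// exposed .git/index. The host list is a placeholder.
const hosts = ["example.ts.net"];

for (const host of hosts) {
  try {
    const res = await fetch(`https://${host}/.git/index`, { method: "HEAD" });
    if (res.ok) console.log(`exposed .git: ${host}`);
  } catch {
    // host unreachable, ignore
  }
}
```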
It caught my eye also but the article was interesting so I'll forgive OP :-)
On the topic of tamagotchi, if you happen to have a flipper zero there is an emulator for it :-) my kid enjoyed it for a while and it saved me a few bucks by not having to buy one.
This is a bit of a sidetrack, but in case someone is interested in reading their history more easily: my conversations.html export file was ~200 MiB and I wanted something easier to work with, so I've been working on a project to index it and make it searchable.
It uses the pagefind project so it can be hosted on a static host, and I made a fork of pagefind which encrypts the indexes, so you can host your private chats wherever and they will be encrypted at rest and decrypted client-side in the browser.
(You still have to trust the server as the html itself can be modified, but at least your data is encrypted at rest.)
One of the goals is to allow me to delete all my data from chatgpt and claude regularly while still having a private searchable history.
It's early but the basics work, and it can handle both chatgpt and claude (which is another benefit as I don't always remember where I had something).
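For anyone curious what "decrypted client-side" means in practice, here is a rough WebCrypto sketch, not the fork's actual code; it assumes AES-GCM with a 12-byte IV prepended to each encrypted index chunk, and the real on-disk format may differ:

```typescript
// Sketch: decrypt an encrypted index chunk in the browser.
// Assumes AES-GCM with the 12-byte IV stored in front of the ciphertext.
async function decryptChunk(rawKey: ArrayBuffer, blob: ArrayBuffer): Promise<ArrayBuffer> {
  const key = await crypto.subtle.importKey("raw", rawKey, "AES-GCM", false, ["decrypt"]);
  const iv = new Uint8Array(blob, 0, 12);       // IV prepended to the chunk
  const ciphertext = new Uint8Array(blob, 12);  // rest is the encrypted index data
  return crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
}
```

The key never has to leave the browser; the static host only ever sees ciphertext.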
this is not something I came up with; Simon wrote it, and I liked the differentiation between "vibe coding", where there is less effort, and "vibe engineering", where you put in more direction
for this particular project I think I would actually go back and say it's vibe coded, but I didn't want to just call it vibe coding because I did spend time going back and forth and directing the agent
Interesting distinction. I've previously heard vibe coding described as "vibe prompting, but you actually do some work." That aside though, I'd just call what you're describing coding with AI.
coding with AI is coding just as much as coding with VSCode is coding. you decide which parts you get help from a given tool and which you don’t. end of the day, it is all coding and “coding with AI” sounds as silly as “coding with keyboard / microphone”
The first part is exactly my point, but the latter is nonsense in my book. You cannot ask VSCode (pre-AI) to write a program for you. It's akin to doing math with AI vs. an Nspire CAS. There's no reason to think you need to respond to those who shame vibe coding with claims that we shouldn't differentiate our tools, but we also shouldn't just say it's all the same. We wouldn't claim that about farming with a laser-powered weed killer compared to farming with a horse-drawn plow.
For software, but that's a well trodden path at this point. I've seen a few projects that are actually "vibe engineering" outside of software on the 3d modeling side so the terms are confusing.
thanks for the info, I'll see if I can get an agent to fix it
it's a static webpage, the source is available with right-click view source, I added a BSD2 licence header to it to make clear it's fine to take and do mostly whatever with
And if you are open to bug reports... if I pan around, the drawings move smoothly with the map, but if I zoom in/out the drawings only move after the map's zoom animation ends, rather than smoothly
Looks useful but doesn't work quite as expected for me.
In Vivaldi location tracking doesn't work.
Version: 7.7.3851.66 (Official Build) (64-bit)
Chromium Version: 142.0.7444.245, Extended Stable channel (may also include additional security patches)
Channel: Official Build
Platform / OS: Linux - linuxmint 21.3
And in Firefox 146.0.1 on the same machine the URL doesn't get updated.
But it's not well tested. Try this: create a map and open its url in another browser. Now change the first map by adding more annotations or moving the map center, copy the newly generated url, and paste it into the map already open in the other browser. That does not work (at least for me across different browsers).
I think I know what you mean, thanks for the report. Modifying the # part of a url on an already-loaded page is not the same as reloading it, and I doubt I watch for that part changing
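(For reference, watching for that is a `hashchange` listener, roughly like below; `loadFromHash` is just a hypothetical stand-in for whatever the page does on load:)

```typescript
// Re-parse the #fragment when it changes, without a full page reload.
// loadFromHash is a made-up name for the page's own fragment parser.
declare function loadFromHash(hash: string): void;

window.addEventListener("hashchange", () => {
  loadFromHash(location.hash);
});
```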
yeah, isn't it impressive how fast modern computers can be if you make a bit of effort? in this case I think I told it to just use plain javascript and make sure it's fast :-)
This is nice, and for those who are asking, it's different from ngrok and the others in that you don't need a separate client; (almost) everyone has ssh installed.
To the author: I wish you the best of luck with this, but be aware (if you aren't) that this will attract all kinds of bad and malicious users who want nothing more than a "clean" IP to funnel their badness through.
serveo.net [2] tried this 8 years ago, but at some point when I wanted to use it I found it was no longer working; as I remember, the author said there was too much abuse for him to maintain it as a free service
Even the ones where you have to register, like cloudflare tunnels and ngrok, are full of malware, which is not a risk to you as a user but means they are often blocked.
Also a little rant: tailscale also has their own one of these, called funnel. It has the benefit of being end-to-end encrypted (in theory) but the downside that you are announcing your service to the world through the certificate transparency logs. So your little dev project will have bots hammering on it (and trying to take your .git folder) within seconds of you activating the funnel. So make sure your little project is ready for the internet, with auth and nothing sensitive at guessable paths.
Just want to say that I appreciate you maintaining this list. It's one of those things I need to do every now and then, so having a place that gives me a current summary of the options is very handy.
As someone who has launched something free on HN before, the resulting signups were around 1/3rd valid users doing cool things and checking things out, and 2/3rds nefarious users.
My service (which doesn't have public access, only via SSH as a client) was used by a ransomware gang, which got the service involved in investigations by the Dutch CERT and the Dubai police.
I run playit.gg. Abuse is a big problem on our free tier. I'd get https://github.com/projectdiscovery/nuclei set up to scan your online endpoints and auto-ban detections of C2 servers.
Thanks for sharing this. I run packetriot.com, another tunneling service and I ended up writing my own scanner for endpoints using keyword lists I gathered from various infosec resources.
I had done some account filtering for origins coming out of Tor, VPN networks, data centers, etc., but I recently dropped those and added a portal page for free accounts, similar to what ngrok does.
It was very effective at preventing abuse. I also added a mechanism for reporting abuse on the safety page that's presented.
Our services were used for C2 as well. I investigated it a bit but eventually decided to just drop TCP forwarding from our free tier, and that reduced our abuse/malware reports for C2 over TCP to essentially zero.
One path I looked at was to use the VirusTotal API to help identify C2s that other security organizations had flagged, and leverage that to automatically take down malicious TCP endpoints. I wrote some POCs but did not deploy them. It's something I plan on taking up again at some point next year.
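The core of such a POC is small; here is only a sketch of the idea against the v3 API (endpoint path and response field names from memory, so double-check them against the docs before relying on this):

```typescript
// Sketch: look up a tunnel endpoint's IP in VirusTotal (v3 API) and
// flag it if several vendors already marked it malicious.
// VT_API_KEY is assumed to be set in the environment.
const VT_KEY = process.env.VT_API_KEY!;

async function looksMalicious(ip: string): Promise<boolean> {
  const res = await fetch(`https://www.virustotal.com/api/v3/ip_addresses/${ip}`, {
    headers: { "x-apikey": VT_KEY },
  });
  if (!res.ok) return false; // unknown IPs / rate limits: don't auto-ban
  const body = await res.json();
  const stats = body.data.attributes.last_analysis_stats;
  return (stats.malicious ?? 0) >= 3; // arbitrary threshold for this sketch
}
```

The threshold and the auto-takedown policy are the hard parts; the lookup itself is trivial.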
Want to chat on discord? Maybe we could combine efforts to try and stop people abusing our services :). We have a few vendors sending us automated reports, maybe I could open it up for multiple projects.
Do you have funding to cover the bandwidth costs which will ultimately result from this? Or if you're running this from a home network, does anyone know if OP should be concerned about running into issues with their ISP?
The tunnel host appears to be a Hetzner server, they are pretty generous with bandwidth but the interesting thing I learned about doing some scalability improvements at a similar company [0] is that for these proxy systems, each direction’s traffic is egress bandwidth. Good luck OP, the tool looks cool. Kinda like pinggy.
Random thoughts: one can get a user's ssh public keys from GitHub on the fly (from `https://github.com/<username>.keys`), so you could require a valid GitHub account to use this service, without an (extra) auth process.
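The endpoint really is that simple; a sketch of the lookup side (`githubKeys` is just a name I made up, and matching a presented key against the list is left out):

```typescript
// Fetch a GitHub user's public SSH keys; the endpoint returns one key
// per line as plain text. The username is a placeholder.
async function githubKeys(username: string): Promise<string[]> {
  const res = await fetch(`https://github.com/${username}.keys`);
  if (!res.ok) throw new Error(`no such user: ${username}`);
  const text = await res.text();
  return text.split("\n").filter((line) => line.trim().length > 0);
}
```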
Yeah, this is the next step. I first wanted to understand if this gets any traction. I think I will provide a dockerized version for the server part that you can just run with a simple command and maybe some interface to create api keys and distribute them to your users.
Fair enough from a business standpoint, but seeing as there are massive privacy/security risks involved in exposing your data to an opaque service, the open source component is probably a non-optional aspect of the value prop.
Even if none of these extensions were malicious, they might have some vulnerability that would allow an attacker to get your cookie? Or the developers of those might have unknowingly been phished, like what happened last December.
Sorry for just offering speculation, hopefully you figure it out. Even if it was "only" a Reddit account, the feeling of not knowing how it happened and if other things are at risk must be horrible.
https://crxplorer.com/ might help you to inspect your extensions a bit deeper if you are interested and have the knowledge.
And finally, just a comment, passkeys/webauthn/fido keys would not protect against a session cookie theft. They only prevent the login stage from being phished.
I've just had my Amazon account hacked with an order of a gift card. I saw it immediately so I was able to request a refund, change passwords, add 2fa, remove any payment info.
This is probably linked, I still don't understand how this is possible...
Note though that if you don't have swap now, and enable it, you introduce the risk of thrashing [1]
If you have swap already it doesn't matter, but I've encountered enough thrashing that I now disable swap on almost all servers I work with.
It's rare but when it happens the server usually becomes completely unresponsive, so you have to hard reset it.
I'd rather the application trying to use too much memory get killed by the OOM killer, so I can ssh in and fix it.
That's not true. Without swap, you already have the risk of thrashing. This is because Linux views all segments of code which your processes are running as clean and evictable from the cache, and therefore basically equivalent to swap, even when you have no swap. Under low-memory conditions, Linux will happily evict all clean pages, including the ones that the next process to be scheduled needs to execute from, causing thrashing. You can still get an unresponsive server under low memory conditions due to thrashing with no swap.
Setting swappiness to zero doesn't fix this. Disabling swap doesn't fix this. Disabling overcommit does fix this, but that might have unacceptable disadvantages if some of the processes you are running allocate much more RAM than they use. Installing earlyoom to prevent real low memory conditions does fix this, and is probably the best solution.
Disabling swap on servers is de-facto standard for serious deployments.
The swap story needs a serious upgrade. I think /tmp in memory is a great idea, but I also think that particular /tmp needs swap support (ideally with compression, ZSWAP), but not the main system.
> Disabling swap on servers is de-facto standard for serious deployments.
I guess I have not been deploying seriously over the last couple of decades because the (hardware) systems that I deploy all had some swap, even if it was only a file.
Pretty much all the guidelines about swap partitions out there reference old allocator behaviour from way over a decade ago - where you'd indeed typically run into weird issues without having a swap partition, even if you had enough RAM.
The short (and inaccurate) summary was that it'd try to use some swap even if it didn't need it yet, which made sense in a world where enough memory was too expensive, and got fixed, at the cost of making the allocator way more complicated, once we started having enough memory in most cases.
Nowadays typically you don't need swap unless you work on a product with some constraints, in which case you'd hand tune low memory performance anyway. Just don't buy anything with less than 32GB, and you should be good.
yeah pretty much, also configuring memory limits everywhere where apps allow it. some software also handles malloc failures relatively gracefully, which helps a whole lot (thank you postgres devs)
I've spent the last day thinking about that, and I really can't see any big negative side effects. The only issue I'd have is being notified of OOM conditions, and that would just be a syslog regex match. Great plan.
I would also add a link to the gitlab repo on the page; clicking the LICENCE brings me to the source code, but other than that there did not seem to be a link.
Out of curiosity, did you use LLMs to code this? My gut feeling tells me at minimum the readme was written by one, or maybe it's normal to use emojis everywhere :-) Also, I'm not meaning to judge it as good or bad, I'm just curious.
I think one thing that LLMs and coding agents enable is creating these customised solutions which solve a specific problem, in a specific way. Some might consider it wasteful, and I bet many think your effort would have been better spent contributing to one of the existing tools instead of making yet another one, but I find it fascinating that we can finally tell our computers what we need and they will do it.
If you hand-wrote everything, then apologies for the unrelated rant :-)
Yes, I used LLMs to develop this. I think the README has more emojis than any mortal could summon. Hehe
I used ChatGPT to design the solution that I wanted and Claude Sonnet to do most of the coding.
I'm trying to figure out what works for me in the brave new world of AI enabled development, so that I can make recommendations to my team.
A few things that really helped me here were:
- Having the gitlab cli (glab) installed and configured was very helpful because it allowed me to do things like lint the CI file and inspect the build output in the LLM context.
- Having the zereight/gitlab-mcp installed was useful as well. Even though I can make Issues and MRs using the CLI, the LLM frequently made escaping mistakes when writing long comment sections. The mcp tool was great for this.
- Almost all of my process started with me describing a bug or feature, then asking the LLM to investigate the feature and create an Issue. From there I tried as much as possible to keep the scope of my work small and exclusively tied to an issue branch.
I'm a reasonably good programmer - I've been at it for 30 years. I think there's no question that LLMs expand my "radius of capability." Just like everyone else, I'm trying to figure out the best way to safely maximize this new world of tools.