
For the past year, I've been learning a lot more about electronics, in particular designing PCBs and getting them manufactured and assembled. I've come a long way from where I started: last Christmas I was making little LED flashers shaped like trees (everyone has to start somewhere!), and now I'm making small products with some of the super cheap ATtiny chips and writing the code for them.

I really want to get more into microcontrollers and design some more technical projects. I've been wanting to make a portable point-and-shoot camera for a couple of years, though I've never been knowledgeable enough in that area to do it well. I'm finally getting to that point, though.

On a non-electronic-designing front, I'd love to learn more about networking and radios. I'm working on my homelab right now, and just got a nice switch to connect some free 15-year-old office PCs I also have. I'd love to get into AREDN, an 802.11 mesh network that can run on amateur radio frequencies.

I also want to write more about my projects on my website (https://radi8.dev), where hopefully I can share what I work on more often than I currently do.


Image data and telemetry were sent in different messages, so it wasn't too much of a bottleneck. The images were about 100 bytes, while the telemetry was roughly 40.
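As a rough illustration of how a telemetry message stays in that size range (the field names and layout below are my own guesses, not the actual StratoSpore format, which is in the linked repo), fixed-width packing with struct looks something like this:

    import struct

    # Hypothetical field layout, roughly 40 bytes total: timestamp, lat, lon,
    # altitude, two temperatures, pressure, a raw fluorescence reading, and
    # battery level. Not the real StratoSpore format.
    TELEMETRY_FORMAT = "<IddffffHB"  # 4 + 8 + 8 + 4*4 + 2 + 1 = 39 bytes

    def pack_telemetry(timestamp, lat, lon, alt_m, temp_in_c, temp_out_c,
                       pressure_hpa, fluorescence_raw, battery_pct):
        return struct.pack(TELEMETRY_FORMAT, timestamp, lat, lon, alt_m,
                           temp_in_c, temp_out_c, pressure_hpa,
                           fluorescence_raw, battery_pct)

    def unpack_telemetry(payload):
        return struct.unpack(TELEMETRY_FORMAT, payload)

    msg = pack_telemetry(1715000000, 44.05, -123.09, 24500.0,
                         21.5, -52.0, 28.4, 1023, 87)
    print(len(msg))  # 39 bytes, in the same ballpark as the ~40 mentioned above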


Thank you for the kind words! The fluorescence was originally meant to be measured with an AS7273 spectrometer (I unfortunately bought a different one, but it still worked fine), measuring around 680 nm. Certainly not a great setup, but it worked. Light was ambient through acrylic, and I found out far too late that acrylic has UV-blocking effects. Despite that, I feel like the data is still somewhat valid, maybe. I did do some testing with it back on Earth, though I can't remember how it correlated.

The data I have is here: https://github.com/radeeyate/StratoSpore/blob/main/software/... - just be warned that the altitude data isn't exactly what it was while in the air (the GPS wasn't working, so I had to take it from someone else).


From https://hps.org/publicinformation/ate/q12178/ :

> UV light, a form of energy, is defined as light having wavelengths between 100 nanometers (nm, 1 billionth of a meter in length) and 400 nm. [...]

> Most acrylic plastics will allow light of wavelength greater than 375 nm to pass through the material, but they will not allow UV-C wavelengths (100–290 nm) to pass through.

In terms of UV transmission, glass is better for cold frames and the like, because acrylic filters out UV light.

Also, hydrogen peroxide (H2O2) is an algaecide.

/? hydrogen peroxide algaecide https://www.google.com/search?q=hydrogen+peroxide+algaecide


I first got into Raspberry Pi Picos, but I've also been experimenting with ESP32s and some of the nRF chips. I mostly use CircuitPython on them, but I believe Arduino is also a supported platform on those.


I got a couple of RP2040 boards recently and I'm amazed at how easy it is to just get stuff done. Between the native USB support and the CircuitPython support it's been a breeze. I just got a couple of boards up and running UART in a daisy chain. It was intimidating, but the CircuitPython docs made it relatively simple.
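For anyone curious, the basic CircuitPython UART setup really is only a few lines. Pin choices and the forwarding logic below are my own sketch of a board sitting in the middle of a chain, not the parent's exact code; GP0/GP1 and GP4/GP5 are the Pico's two hardware UART pin pairs.

    import board
    import busio

    # Receive from the upstream board on UART1, forward to the next board on
    # UART0. The RP2040 has two hardware UARTs, which makes daisy chains easy.
    uart_in = busio.UART(board.GP4, board.GP5, baudrate=115200, timeout=0.1)
    uart_out = busio.UART(board.GP0, board.GP1, baudrate=115200, timeout=0.1)

    while True:
        data = uart_in.read(64)  # up to 64 bytes from the upstream board
        if data:
            uart_out.write(data)  # pass it along the chain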


Hi HN!

I'm Andrew, and I'm a high school student who participated in Hack Club's Apex program, which funded 30 students to build high-altitude balloon (HAB) payloads.

My project, StratoSpore, was an attempt to use algae fluorescence as a biological sensor for detecting changes in altitude and stratospheric conditions.

This challenge involved a full electronics and software design cycle:

- I designed PCBs based on Raspberry Pi Picos for sensor logging (AS7264 spectral sensor, temperatures, etc.) and used a Raspberry Pi Zero 2 W for data processing.

- I implemented a highly lossy, custom compression algorithm to compress 1080p images down to 18x10 pixels for sending over LoRa (915 MHz), along with shoving a bunch of telemetry into 45 bytes (a rough sketch of the idea is below).

- Unfortunately, the payload couldn't be recovered, as it is stuck in a dense forest. I also had some GPS problems and had to splice some data.
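Purely as an illustration of the kind of brutal downscaling described above (the real StratoSpore encoding is in the linked repo; the 4-bit grayscale format here is my own assumption to show how a frame can land near 100 bytes), something like this can be done with Pillow:

    from PIL import Image

    def compress_frame(path):
        # Shrink a frame to 18x10 and pack two 4-bit grayscale pixels per byte.
        # This pixel format is a guess, not the actual StratoSpore algorithm.
        img = Image.open(path).convert("L").resize((18, 10))
        pixels = list(img.getdata())          # 180 grayscale values, 0-255
        packed = bytearray()
        for i in range(0, len(pixels), 2):    # two pixels per output byte
            hi = pixels[i] >> 4
            lo = pixels[i + 1] >> 4
            packed.append((hi << 4) | lo)
        return bytes(packed)                  # 90 bytes for an 18x10 frame

    payload = compress_frame("frame.jpg")
    print(len(payload))  # 90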

This was a huge step outside my comfort zone and taught me about hardware design, logistics, and compression.

I'd love to hear any thoughts or technical critiques on the design of the software. If you have any questions, I'd love to answer them. Code/hardware is on GitHub (https://github.com/radeeyate/stratospore), and I hope you enjoy the blog post!


And here I am in the US paying $50+/mo for CenturyLink to give me 20 Mbps down and 0.5 Mbps up.


And here I am in Hong Kong paying an admittedly high 248 HKD (USD 32) for 2 Gbit/s up/down (there are cheaper offerings).


shhhh.... don't say any.... the US is the bestest country in the world...

we're also really good at feeding our poor and disabled too


The term KVM (as in the switch) was coined in 1995: https://en.wikipedia.org/wiki/KVM_switch


This is where model cards came from, as far as I know: https://arxiv.org/abs/1810.03993


What was meant is that Elon owns Twitter, while Trump owns Truth Social. These platforms are their "personal diaries."


All system prompts for Anthropic models are public information, released by Anthropic themselves: https://docs.anthropic.com/en/release-notes/system-prompts. I'm unsure (I just skimmed through) what the differences between this and the publicly released ones are, so there might be some.


Interestingly, the system prompt that was posted includes the result of the US presidential election in November, even though the model's knowledge cutoff date was October. This info wasn't in the Anthropic version of the system prompt.

Asking Claude who won, without letting it search, it does seem to know, even though the election was after its cutoff date. So the posted system prompt is supported at least on this point.


To anybody curious, I asked it this exact question: https://claude.ai/share/ea4aa490-e29e-45a1-b157-9acf56eb7f8a

edit: fixed link


The conversation you were looking for could not be found.


oops, fixed


> The assistant is Claude, created by Anthropic.

> The current date is {{currentDateTime}}.

> Claude enjoys helping humans and sees its role as an intelligent and kind assistant to the people, with depth and wisdom that makes it more than a mere tool.

Why do they refer to Claude in third person? Why not say "You're Claude and you enjoy helping hoomans"?


LLMs are notoriously bad at dealing with pronouns, because you can't just blindly copy them the way you can other nouns; their referents depend heavily on context.


[flagged]


'It' is obviously the correct pronoun.


There's enough disagreement among native English speakers that you can't really say any pronoun is the obviously correct one for an AI.


"What color is the car? It is red."

"It" is unambiguously the correct pronoun to use for a car. I'd really challenge you to find a native English speaker who would think otherwise.

I would argue a computer program is no different than a car.


People often refer to their own car and other people's as "she" ("she's a beauty"), so your "obviously" is wrong.


But no one who does that thinks they're using proper English!


"she" is absolutely proper English for a ship or boat, with a long history of use continuing into the present day, and many dictionaries also list a definition of "thing, especially machine" or something like that, though for non-ship/boat things the use of "she" is rather less common.


You’re not aligned bro. Get with the program.


I'm not especially surprised. Surely people who use they/them pronouns are very over-represented in the sample of people using the phrase "I use ___ pronouns".

On the other hand, Claude presumably does have a model of the fact of not being an organic entity, from which it could presumably infer that it lacks a gender.

...But that wasn't the point. Inflecting words for gender doesn't seem to me like it would be difficult for an LLM. GP was saying that swapping "I" for "you" etc. depending on perspective would be difficult, and I think that is probably more difficult than inflecting words for gender. Especially if the training data includes lots of text in Romance languages.


LLMs don't seem to have much notion of themselves as a first-person subject, in my limited experience of trying to engage them on it.


From their perspective they don't really know who put the tokens there. They just calculate the probabilities, and then the inference engine adds tokens to the context window. Same with the user and system prompts: they just appear in the context window, the LLM gets something like "user said: 'hello', assistant said: 'how can I help'", and it calculates the probabilities of the next token. If the context window had stopped in the user role, it would have played the user role (calculated the probabilities for the next token of the user).
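Concretely, by the time the model sees it, the whole conversation is just one flat string that it continues. The role markers below are made up for illustration; real models use their own template tokens.

    # Generic illustration of a chat template; not any vendor's real format.
    def build_prompt(system, messages):
        parts = ["[system]: " + system]
        for role, text in messages:
            parts.append("[" + role + "]: " + text)
        parts.append("[assistant]: ")  # the model "continues" from here
        return "\n".join(parts)

    prompt = build_prompt(
        "The assistant is Claude, created by Anthropic.",
        [("user", "hello"), ("assistant", "how can I help?"), ("user", "who won?")],
    )
    # The model never sees "you" vs. "them"; it just predicts the next token
    # of this one long document.
    print(prompt)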


> If the context window had stopped in the user role it would have played the user role (calculated the probabilities for the next token of the user).

I wonder which user queries the LLM would come up with.


On one machine I run an LLM locally with Ollama and a web interface (I forget the name) that allows me to edit the conversation. The LLM was prompted to behave as a therapist, and for some reason it also role-played its actions, like "(I slowly pick up my pen and make a note of it)".

I changed it to things like "(I slowly pick up a knife and show it to the client)" and then confronted it with "Whoa, why are you threatening me!?" The LLM tries really hard to stay in its role, and then says things like it did it on purpose to provoke a fear response so it could then discuss the fear.
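The editing trick works because the full message history is just data you send with every request; the model only "remembers" what the history says it said. With Ollama's /api/chat endpoint it looks roughly like this (the model name is just an example, and the conversation is a toy reconstruction of the scenario above):

    import requests

    # Rewrite the assistant's earlier turn, then ask the model to continue.
    messages = [
        {"role": "system", "content": "You are a therapist."},
        {"role": "user", "content": "I've been feeling anxious lately."},
        {"role": "assistant",
         "content": "(I slowly pick up a knife and show it to the client)"},
        {"role": "user", "content": "Whoa, why are you threatening me!?"},
    ]

    resp = requests.post("http://localhost:11434/api/chat", json={
        "model": "llama3",      # example model name
        "messages": messages,
        "stream": False,
    })
    print(resp.json()["message"]["content"])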


Interestingly, you can also (of course) ask them to complete system-role prompts. Most models I have tried this with seem to have a somewhat confused idea of the exact style of those, and the replies are often a kind of mixture of the user- and assistant-style messages.


Yeah, the algorithm is a nameless, ego-less make-document-longer machine, and you're trying to set up a new document which will be embiggened in a certain direction. The document is just one stream of data with no real differentiation of who-put-it-there, even if the form of the document is a dialogue or a movie-script between characters.


I don’t know but I imagine they’ve tried both and settled on that one.


Is the implication that maybe they don't know why either, and rather just chose the most performant prompt?


LLM chatbots essentially autocomplete a discussion in the form

    [user]: blah blah
    [claude]: blah
    [user]: blah blah blah
    [claude]: _____
One could also do the "you blah blah" thing before, but maybe third person in this context is clearer for the model.


> Why do they refer to Claude in third person? Why not say "You're Claude and you enjoy helping hoomans"?

But why would they say that? To me that seems a bit childish. Like, say, when writing a script do people say "You're the program, take this var. You give me the matrix"? That would look goofy.


"It puts the lotion on the skin, or it gets the hose again"


Why would they refer to Claude in second person?

