Agree with the comment. We use ES quite extensively as a database with huge documents and, touch wood, we haven't had any data loss. We take hourly backups and restores are simple.
You have to get used to eventual consistency. If you want to read right after writing, even by id, you have to wait for indexing to complete (around one second by default).
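One way to sketch the read-after-write workaround (index name and document body here are hypothetical): Elasticsearch's `refresh=wait_for` parameter makes the indexing call block until the document is visible to search, trading write latency for immediate readability:

```
PUT /orders/_doc/1?refresh=wait_for
{ "user": "u42", "status": "shipped" }
```

This avoids polling after a write, but it makes each write as slow as the refresh interval, so it's best reserved for the few paths that genuinely need read-your-own-writes.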
You have to design the documents in such a way that you never need to join the data with anything else, so make sure each document contains all the data you need. In an SQL db you would normalize the data and then join; here, assume you have only one table and put all the data inside the doc.
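A minimal sketch of that denormalization, with hypothetical field names: what would be three SQL tables (users, orders, order_items) collapses into one self-contained order document, so no join is needed at query time:

```python
# Rows that would normally live in separate, normalized SQL tables.
user = {"id": 7, "name": "Asha", "city": "Pune"}
items = [
    {"sku": "A1", "qty": 2, "price": 10.0},
    {"sku": "B5", "qty": 1, "price": 25.0},
]

order_doc = {
    "order_id": 1001,
    # User fields are copied (denormalized) into the order document.
    "user": {"name": user["name"], "city": user["city"]},
    # Line items are embedded as an array instead of a child table.
    "items": items,
    # Derived values are precomputed at write time, not joined at read time.
    "total": sum(i["qty"] * i["price"] for i in items),
}

print(order_doc["total"])  # 45.0
```

The trade-off is the one the comment describes: reads are join-free, but every copied field widens the document, which is exactly how the size and field-count limits below get hit.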
But as we evolved and added more and more fields to the document, the document sizes have grown a lot (megabytes), and we are hitting limits like the maximum number of searchable fields (1,000 by default; it can be increased, but that isn't recommended) and search buffer limits (100 MB).
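For reference, the field-count ceiling mentioned above corresponds to Elasticsearch's `index.mapping.total_fields.limit` setting (1,000 by default). A sketch of raising it, with a hypothetical index name; as the comment says, doing so is generally discouraged because very wide mappings degrade performance:

```
PUT /my-index/_settings
{ "index.mapping.total_fields.limit": 2000 }
```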
My take is that ES is good for exploration and faster development, but you should switch to SQL as soon as the product is successful if you're using it as the main db.
In a website or an app, there are specific affordances, i.e. buttons, dropdowns, GPS, and text boxes, that bound and steer user input to help the user achieve the task.
For smart speakers like Alexa and Google Home, voice being the only input lets users say whatever they want, which makes the task space infinite. But voice recognition and NLP are not yet at a point where they can recognize everything the user has said. This creates a less than stellar experience, with the user having to repeat, rephrase, or, even worse, abandon the task. I think this platform will blow up when NLP/AI can detect user intent with near-perfect accuracy and make the interaction as fluid as a well-designed app. It doesn't hurt for Amazon to have a large installed base ready to use the platform if/when intent recognition becomes on par.
Of course it will never replace the phone/desktop, since there are things we cannot say out loud (secrets), places where voice isn't possible (loud environments), and situations where it just isn't courteous.
> This creates a less than stellar experience, with the user having to repeat, rephrase, or, even worse, abandon the task.
Not to mention: constant wondering whether the task can even be accomplished. When a voice assistant rejects your query, in many cases you can't be sure whether it's because it couldn't understand you, or because it can't possibly accept what you said as a valid input in the context it's in. In regular interfaces, visible constraints matter as much as affordances.
Norman would refer to these constraints as "signifiers", indicators of possible affordances. It's interesting how weak voice assistants are at signifying what you can actually do with them.
The dev team can add helpful responses that signify the available set of voice commands for tasks the skill can complete, based on keywords it recognizes in the user's utterance, or simply let users know it didn't understand them and that they can hear a list of actions by asking for help. (I've worked on published Alexa skills for several large tech companies.)
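A minimal sketch of that pattern, assuming a hypothetical action list and helper name: when no intent matches, the skill returns a response in the standard Alexa response JSON shape that speaks the supported actions and keeps the session open for another try:

```python
# Hypothetical list of actions this skill supports.
KNOWN_ACTIONS = ["check order status", "track a package", "cancel an order"]

def fallback_response(actions):
    """Build an Alexa-style JSON response that signifies what the skill can do."""
    speech = (
        "Sorry, I didn't catch that. You can say: "
        + ", ".join(actions)
        + ". Or say 'help' to hear this list again."
    )
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            # Keep the session open so the user can immediately retry.
            "shouldEndSession": False,
        },
    }

resp = fallback_response(KNOWN_ACTIONS)
print(resp["response"]["outputSpeech"]["text"])
```

In a real skill this would be returned from the fallback/help intent handler; the point is that the response itself acts as the "signifier" the thread is discussing.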
I think a cool immersive middle ground will be smart surfaces embedded in wall materials that can display things: they could simply list all available actions, or anthropomorphize the smart assistant as a virtual servant that follows you around, serving up facts and doing monotonous IoT actions for you.
Now the privacy and surveillance implications of something like this are another story...
> Now the privacy and surveillance implications of something like this are another story...
Those would be resolved here and elsewhere if the industry could be made to stop trying to own people's data. It's not the data that should be a commodity, it's software and clouds.
Wow... This is amazing. As someone who mispronounces a lot of words because of my education in Indian English, which takes most of its sounds from British English, I find this very helpful. Though there are the Google and Merriam-Webster dictionaries, it's sometimes not easy to get the pronunciation of a word in a sentence. This fills that gap! I'm also curious to know how it is implemented. Thanks & all the best!
I haven't read the full paper to know what hardware it needs, but if it can be put into a headphone-like wearable and integrated with the phone to output text, then we could type on the phone as fast as we can think. Looking forward to it! Also, one of the major impediments to using voice assistants in public is not wanting to shout into your phone; if this can be integrated directly into the assistant, voice assistant use will explode. The possibilities are amazing!
This technology uses an electrode array, referred to as ECoG, which needs to be surgically placed on the surface of the brain. Current technology cannot achieve this signal quality with non-invasive methods.
If there is enough information in EEG signals to generate speech, an intervening layer to "convert" EEG signals into ECoG signals is an unnecessary complication.
Unfortunately, there probably isn't; on top of that, EEG is extremely susceptible to noise from nerve signals going to the head/face, such as clenching your jaw or raising your eyebrows.
ECoG has much higher temporal and spatial resolution than EEG, so conversion from EEG to ECoG is not likely. The opposite can be done, but it isn't very useful.
Completely agree about the inspiration part. I can imagine the laughter between the two brothers as they watched early staged prototypes fail and water spew out... very cool.
I used to lurk on Quora. Once I read an answer to an anonymous question, my feed started filling with those anonymous questions. I tried hard to curate my feed by hiding the questions I didn't want to see, but I kept getting the same kinds of things, albeit with different questions. Granted, I did read some of those questions, but it seemed as if I could no longer curate the feed toward what I wanted, and the algorithmically "popular on Quora" feed couldn't be avoided. So I started logging out and reading answers from users I was interested in by going directly to their profiles. But Quora started pestering me to log in if I viewed more than two questions, so I gave up and stopped using the site. It's a shame, as there are some good writers whose content is, as far as I know, only available on Quora, but now I cannot read it without wading through all the crap answers in my feed.