Hacker News

I disagree. I have used Alexa and Google Assistant a lot and also developed an Alexa skill for controlling Dyson's robot vacuum cleaners, so I'm pretty familiar.

The problem is that the set of supported operations is always MUCH smaller than the set of operations people randomly try. You might develop a skill with 200 commands or whatever and think you've covered everything, but people come up with thousands of possible commands just by guessing.

This means if people just do "I'll try asking this..." then probably 80% of the time it won't work. That's an incredibly frustrating experience. You quickly give up and just stick to the features that you know work, and never try to find any new ones.

But I also disagree that OpenAI has the same problem, because with LLMs you don't need to manually add thousands of possible commands, so any random request people make is MUCH more likely to work.



Honestly I'm disappointed Amazon hasn't married these two technologies either, but I think it must just be too expensive. I would think you could even prompt a midrange LLM to process the request and generate structured data for the limited list of supported prompts for a skill.


Too expensive and they're probably scared about how uncontrollable and unreliable LLMs are. Can't say I blame them really. Every time a company deploys an LLM you get a news story like "hurr durr I tricked it into saying poopy pants" or whatever.



