Nicolas Chauvat wrote:
> > > Do you think there is some way that Narval and Piper could share an
> > > NLI "engine"?
>
> Yes, it may even be called www.alicebot.org :-) It's one of the best NLI
> stuff available in free software I've seen so far. But I could dig other
> things out of my bookmarks stack (ThoughtTreasure maybe, hmm... where was
> that URL?)

I have a ton of refs too. I'll post them to the list later.

[speech rec/synth]
> How would you like to integrate the same kind of things with Piper ?

I'm not sure if you're asking "if I would like to" or "how I would do it".
For the former, sure! :-)

For the latter, Loci's original MVC design was developed largely because we
wanted to do exactly such things: NLI and speech recognition/synthesis. With
MVC (Model, View, Control), the "control" is the (NLI) command from the user
and the speech recognition, and the "view" is the response from the computer
and the speech synthesis. And every command gives a response.

The neat thing is, with Piper's multiple UIs, the user can control one UI
(e.g., Pied/Piper) with another (e.g., Peep/Piper). Imagine saying "connect
this node to that" and then seeing it happen graphically! Now imagine
connecting nodes graphically and getting a verbal confirmation!

Provided we have the NLI, I think that connecting to the speech rec/synth
API will be a smaller task.

Oh, and I do recognize the wonderful uses of speech with AI :-)

Cheers,
Jeff
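P.S. A minimal sketch of the MVC idea above, in Python. All the names here
(Model, View, Controller, connect, etc.) are hypothetical, not Piper's actual
API: the point is just that every "control" command updates the model and
*all* registered views render the result, so a command issued through one UI
is confirmed in every other.

```python
class Model:
    """Holds the application state: which nodes are connected."""
    def __init__(self):
        self.links = []  # list of (source, target) node connections

    def connect(self, src, dst):
        self.links.append((src, dst))
        return f"connected {src} -> {dst}"


class View:
    """A UI front end. A real view might draw the link or speak it."""
    def __init__(self, name):
        self.name = name
        self.messages = []

    def render(self, event):
        self.messages.append(f"[{self.name}] {event}")


class Controller:
    """Routes commands (from NLI, speech rec, GUI, ...) to the model,
    then notifies every attached view of the response."""
    def __init__(self, model):
        self.model = model
        self.views = []

    def attach(self, view):
        self.views.append(view)

    def command(self, src, dst):
        event = self.model.connect(src, dst)
        for v in self.views:          # one command, every view responds
            v.render(event)


if __name__ == "__main__":
    ctrl = Controller(Model())
    graphical = View("Pied/Piper")    # graphical UI
    spoken = View("Peep/Piper")       # spoken UI
    ctrl.attach(graphical)
    ctrl.attach(spoken)
    ctrl.command("nodeA", "nodeB")    # say "connect this node to that"...
    print(graphical.messages)         # ...and see it confirmed graphically
    print(spoken.messages)            # ...and hear it confirmed verbally
```

Swap the View subclasses and the same command path drives the GUI, the NLI
response, and the speech synthesis without touching the model.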