In the Future, How Will We Talk to Our Technology?

One of the best scenes from Larry David’s tour-de-neuroses Curb Your Enthusiasm opens with Larry sitting at a restaurant. As cheesy music plays, the camera pans out, revealing the guy at the table next to him. He’s sitting alone, but jabbering loudly, reminding someone we can’t see that “on no planet is a shoe caddy a good gift.”

Then comes the reveal: Cut to the other side of this joker’s head, and there’s his Bluetooth headset. Larry, tired of his crap, starts talking loudly to himself. Eventually he fights with the guy next to him, and then they both go back to complaining to the empty chairs in front of them. Jerks.

The episode aired in 2007. Mercifully, the “Bluedouche” problem went away for a while after that—it was replaced by people sitting in silence, staring into their screens, which is at least easier to sit next to. Things are changing again: As we become more reliant on Siri, Google Now, Cortana, and the world of virtual assistants and voice-based apps and platforms, we’re starting to talk to our phones again. But this time, it should be way better.

Right now, we really only have one way to talk to our gadgets: We tap a button, bring the bottom half of our phone to our mouth, and speak extra-clearly into it. But few believe that's how it'll always be, and they have plenty of pop culture examples of this future: the earbud from Her, the screens-everywhere world of Total Recall, the computer in Star Trek. But mostly it's the earbud from Her.

Everywhere you turn, there's a company working on this kind of wireless, unobtrusive, forget-it's-in-there earpiece. Bragi's Dash is probably the most commonly cited example, but there's also the Pearbuds, the OwnPhones, the Motorola Hint, the HearNotes, the Earin buds, the Truebuds, and countless others from companies big and small. Kickstarter's filthy with the stuff. They're not just Bluetooth headsets minus the ostentatiousness; they're an omnipresent way to digitally hear and be heard.

[Image: The Bragi Dash, a set of wireless earbuds you're meant to wear all the time. Photo: Bragi]

Larry David wasn't mad at the jackass next to him because the earpiece offended his refined aesthetic sensibilities, though. He lost his mind because the guy was yelling. At no one. "The number one rule is, it shouldn't be a disruptive distraction," says Diane Gottsman, an etiquette expert who works with executives on business manners. "It's enormously convenient to speak into Siri," she says, but don't be a jerk about it. And luckily, it's a little easier now not to be.

For one thing, microphones are getting better. We've spent too long screaming into our headsets just to be heard over the wind or traffic, or stringing our microphone into a mustache two inches above our mouths. Now our mics are clearer, stronger, just better. And so is their software.

In the Apple TV demo room after Apple's recent keynote event, employees were actively discouraging people from holding the remote to their mouths as they spoke search queries to Siri. It works just fine, they said, if you leave it at your side. This is a triumph of noise cancellation: software throws out all the background noise and isolates the thing we actually want to hear. Tech like HD Voice makes us sound better, too, and new low-power processors make sure phones are listening all the time.

Qualcomm's latest Snapdragon 820 processor, headed for many of the next wave of flagship phones, has built-in technology that can always be listening while using almost no power. Its Sense Audio software can detect where you are by listening to your environment, and selectively remove ugly background noise. Intel's Edison chipset can be used in similar ways, and Intel found a particularly cool way to demonstrate that earlier this year.

The company worked with BMW to build a connected helmet that lets you converse with your (similarly connected) bike in plain English. They've taken that most inscrutable of interfaces, the Check Engine light, and forced it to explain itself. It can hear you, and speak back, all over the din of the open road.

It's just a prototype, they say over and over as they show it to me in a tiny conference room inside Intel's headquarters. You know, in case the exposed circuitry or impossibly complicated setup wasn't a tip-off. But it works. You say "Hello, smartbike" into the front of the modified gray HJC helmet, and a flat British male voice responds: "Hello." Now you go. Say "Bike status," and you get a quick audio rundown of your R1200GS's vitals. Ask for "Range," and you get an exact count of the fuel remaining. It can give you directions, divert you to a gas station if you don't have enough fuel to get there, or even warn you when you're coming up on a blind turn. You never need to look at your phone, or even a heads-up display; everything happens through two earpieces and a microphone right next to your mouth.
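To make the shape of that interaction concrete, here's a minimal sketch of a wake-phrase-gated command dispatcher in Python. Everything in it is hypothetical, invented for illustration (the BikeState class, the command table, the canned replies); a real helmet would feed transcripts from an actual speech-recognition engine and pull live data from the bike, and none of this is Intel's or BMW's code.

    # A toy wake-phrase-gated voice-command loop, loosely modeled on the
    # "Hello, smartbike" demo described above. Transcripts are simulated
    # with a hard-coded list; all names here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class BikeState:
        fuel_liters: float
        km_per_liter: float

        def range_km(self) -> float:
            return self.fuel_liters * self.km_per_liter

    def bike_status(bike: BikeState) -> str:
        return f"Fuel: {bike.fuel_liters:.1f} liters. Estimated range: {bike.range_km():.0f} km."

    def bike_range(bike: BikeState) -> str:
        return f"You can ride roughly {bike.range_km():.0f} km on the remaining fuel."

    WAKE_PHRASE = "hello, smartbike"
    COMMANDS = {"bike status": bike_status, "range": bike_range}

    def handle(transcript: str, bike: BikeState, awake: bool) -> tuple[str, bool]:
        """Return (spoken reply, new awake flag) for one recognized utterance."""
        text = transcript.strip().lower()
        if not awake:
            # Ignore everything until the wake phrase is heard.
            return ("Hello." if text == WAKE_PHRASE else "", text == WAKE_PHRASE)
        handler = COMMANDS.get(text)
        return (handler(bike) if handler else "Sorry, I didn't catch that.", True)

    if __name__ == "__main__":
        bike = BikeState(fuel_liters=12.0, km_per_liter=20.0)
        awake = False
        for line in ["hello, smartbike", "bike status", "range"]:
            reply, awake = handle(line, bike, awake)
            print(f"> {line}\n{reply}")

Note how the wake phrase doubles as a privacy gate: until it's heard, everything else is discarded, which is roughly the job those low-power always-listening processors do in hardware.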
Microphones will keep getting better, simpler, and smarter. That's just Moore's Law. But they'll always work the way they always have. Meanwhile, entirely new technologies are threatening to make much larger jumps in how we chat with our gadgets.

Two in particular, bone conduction and subvocalization, are old technologies with proven science behind them that are only now starting to make their way onto store shelves.

"The headphone business is a 2 billion dollar-plus business," says Bruce Borenstein, CEO of Syracuse, NY-based headphone maker AfterShokz, "but everyone's doing it the same way." AfterShokz was founded four years ago, when a friend showed Borenstein a set of headphones that use vibrations to pass sound directly to your cochlea, instead of pumping it through your ear canal. It's called bone conduction, and it's been around for decades, mostly in military environments. But Borenstein figured he could sell it to regular people instead.

AfterShokz's headphones don't go over or in your ear; they rest instead on your cheekbone, just in front of your ear. That means you can hear your music, and still hear the world around you. "I immediately saw this as a great solution for sports headphones," Borenstein says. "Particularly people who do things outside." He'd seen cyclists riding with one earphone in and one out, or people dangerously unaware of the world around them.
[Image: The AfterShokz Trekz let you hear music, and the world. They probably don't make you awesome at pushups. Photo: AfterShokz]
Borenstein found a partner in China that was making a product called "Bone Sonic." That sounds like a toothbrush, so he renamed it AfterShokz and started selling the product to athletes and outdoors enthusiasts. After a few rounds of upgrades and an overhaul of the sound spectrum that makes them actually sound good, the AfterShokz headphones started selling like crazy. Borenstein wears his Bluez 2, the latest model of AfterShokz headphones, all day, and the company's new model, called Trekz, has already raised more than eight times its Indiegogo funding goal.

Bone conduction can help gadgets hear you better (vibrations aren't prone to the background-noise problems that plague standard microphones), and it makes it safer to wear a microphone on your head all day. But we still haven't solved the loud-asshole problem. That's where subvocalization comes in.

Think about when you're reading a book and you find yourself mouthing the words you're reading. That's subvocalization: forming words without saying them out loud. Your nerves and brain process the words just the same; all you need is a different way to collect the data about what you're saying. One of the best examples of how this might play out is in the Ender's Game series. In the books, Ender has a device called the Jewel, an implant connected to his jawbone. When he's wearing it, he can talk to his computer just by moving his mouth and tongue muscles, without making a sound.

But it's not just science fiction. Subvocalization is very real. In 2004, NASA ran a study and discovered that "small, button-sized sensors, stuck under the chin and on either side of the 'Adam's apple,' could gather nerve signals, and send them to a processor and then to a computer program that translates them into words." The goal, the researchers said, was to use subvocal communication "in spacesuits, in noisy places like airport towers to capture air-traffic controller commands, or even in traditional voice-recognition programs to increase accuracy." Maybe talking to your computer eventually won't involve talking out loud at all.

That's all still in the research phase, and real consumer applications are much further out. Most of the devices that do exist are nests of wires clipped to your face, making you look like a science experiment. But it's something lots of researchers and scientists believe will be important to the way we communicate soon.

Time and again, Forrester analyst James McQuivey comes back to Her. "They could have conversations without anyone else even knowing," he says, referring to the movie's two protagonists, one human and one AI. "You get to that point, where you can have essentially an internal dialog with an external entity, and that's where we start to co-evolve." We'll merge with the machine, he says, become one and the same, the line blurred between our thoughts and our computers.

Before that can happen, though, our computers need way more clues about what we're doing, saying, and thinking. That's coming too.

At the Consumer Electronics Show in January 2014, not yet a month after Her hit theaters, Intel's New Devices group showed off a prototype of an earpiece named Jarvis. A man identified only as "Larry" wrapped the earpiece around his right ear, and in about 60 seconds had made a reservation, let a colleague know that he'd be missing his meeting, and ignored three messages from his wife.

For the sake of the demo, Larry talked boisterously. But he could have just nodded his head, or shaken it no, to answer Jarvis's questions. The headset uses motion sensors to read body language and simple gestures, to make interactions even faster.
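As a rough illustration of how motion sensors can read those gestures, here's a minimal sketch in the same spirit: it integrates gyroscope readings and compares rotation about the pitch axis (nodding) with rotation about the yaw axis (shaking). The axis conventions, sample rate, and threshold are assumptions for illustration, not Intel's Jarvis implementation.

    # A toy nod-vs-shake classifier: accumulate absolute angular motion
    # on each axis and see which dominates. Thresholds and conventions
    # are illustrative guesses, not Intel's code.
    def classify_gesture(samples, threshold=1.5):
        """samples: list of (pitch_rate, yaw_rate) in rad/s at a fixed rate.
        Returns "yes", "no", or None if no clear gesture was made."""
        dt = 0.02  # assume a 50 Hz sample rate
        pitch = sum(abs(p) for p, _ in samples) * dt  # total nod motion, rad
        yaw = sum(abs(y) for _, y in samples) * dt    # total shake motion, rad
        if max(pitch, yaw) < threshold:
            return None  # head was basically still
        return "yes" if pitch > yaw else "no"

    # A nod: strong pitch motion, little yaw.
    nod = [(3.0, 0.2), (-3.0, 0.1), (2.8, 0.2), (-2.9, 0.1)] * 10
    print(classify_gesture(nod))    # -> "yes"

    # A shake: strong yaw motion, little pitch.
    shake = [(0.1, 3.2), (0.2, -3.1), (0.1, 3.0), (0.2, -3.0)] * 10
    print(classify_gesture(shake))  # -> "no"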
And with Intel's RealSense cameras, your computer could read your emotions as you talk, and make even more informed decisions. Intel's dream is fully multimodal, multi-input conversations, with as many social and emotional cues as we have when we talk face to face. Then they stop being computers, and start becoming something much more. A little scarier? Maybe. But a lot more personal, too.

Read more: http://www.wired.com/2015/09/future-will-talk-technology/








