MIT’s The Engine raises $200M to fund ‘tough tech’

If you think that too many VCs these days are playing it safe and only looking to back sure bets rather than moonshots and more ambitious ventures, take a look over at what’s going on in Boston. The Engine, based out of MIT and Cambridge, has raised a $200 million fund to back and help incubate startups working on “tough tech” — new challenges in areas like aerospace, advanced materials, biotech, genetic engineering and renewable energy.

In addition to the new fund, The Engine is announcing its first seven investments:

  • Analytical Space: systems that provide no-delay, high-speed data from space, to address global challenges such as precision agriculture, climate monitoring, and city planning.
  • Baseload Renewables: ultra low-cost energy storage to replace fossil baseload generation with renewable energy to successfully reduce carbon on a global level.
  • C2Sense: a digital olfactory sensor for industrial use cases such as food, agriculture, and worker safety, transforming smell into real-time data that can be accessed remotely.
  • iSee: delivering the next generation of humanistic artificial intelligence technology for human and robotic collaborations, including autonomous vehicles.
  • Kytopen: accelerating the development of genetically engineered cells by developing technology that modifies microorganisms 10,000 times faster than current state-of-the-art methods.
  • Suono Bio: enabling ultrasonic targeted delivery of therapeutics and macromolecules across tissues without the need for reformulation or encapsulation.
  • Via Separations: developing a materials technology for industrial separation processes that uses 10 times less energy than traditional methods.

The idea is to bring more funding to areas that are sometimes considered risky ventures by more established firms that have focused more on software, said Katie Rae, CEO and managing partner of The Engine.

“When you think of tough tech, you have to think of things in the physical world,” said Rae. “Often it’s a combination of software and hardware, and a longer time to market.”

Startups can be from anywhere — not just MIT, or even Boston — but if they take money from The Engine, they need to move themselves to the city, Rae said. There, they have the option of taking up digs at The Engine itself. The Engine, she said, plans to build “clusters” — which sound like labs — around specific areas like energy and biomedicine to help provide facilities to the startups.

Rae said that about $25 million of the funding comes from MIT, with the rest from family offices and other funds.

MIT is not the only academic institution that is looking to leverage its expertise and alumni network to build out new companies and hopefully get a tidy return out of the effort. UC Berkeley is launching SkyDeck, an accelerator that has raised a fund of about $20 million, backing startups that have at least one founder with a tie to the university.

Featured Image: Andrew Hitchcock/Flickr UNDER A CC BY 2.0 LICENSE

Music video AR app shuts down

The app that let users insert themselves into their favorite music videos is shutting down. In a Medium post, co-founder and chief executive officer David Hyman wrote that the startup, which launched as Chosen three years ago and raised at least $10 million, decided to close up shop after a potential acquisition fell through.

Chosen started out as a talent competition app that eventually focused mostly on music. Though it had a marketing partnership with Ellen DeGeneres, who promoted the app on her show and also took an equity stake, Chosen struggled to compete. Earlier this year, Chosen pivoted and renamed itself after figuring out how to create greenscreen effects without the need for a full studio, making it possible for smartphone users to insert themselves in front of live video backgrounds.

What set the app’s features apart from other background effects, like those in Apple’s Photo Booth, was that it worked with non-stabilized, moving cameras. As TechCrunch’s earlier coverage explained, “The team’s patent-pending algorithm (they wrote a white paper going into more detail) essentially combines old-school chroma keying with new technologies, like object class detection, edge detection, color manipulation and other computer vision technologies. In short, they dynamically prioritize and combine these different techniques depending on the environment in which the video is being recorded.”
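The team’s dynamic blending of techniques is proprietary, but the old-school chroma-keying half of that description is easy to illustrate. Here is a toy sketch in Python, not the app’s actual algorithm; the function name and threshold value are invented for illustration:

```python
import numpy as np

def chroma_key_composite(frame, background, threshold=40):
    """Naive green-screen keying: flag a pixel as 'screen' when its green
    channel exceeds both red and blue by `threshold`, then let the
    background show through those pixels."""
    work = frame.astype(np.int16)  # signed math; uint8 would wrap around
    r, g, b = work[..., 0], work[..., 1], work[..., 2]
    mask = (g - r > threshold) & (g - b > threshold)  # True on screen pixels
    out = frame.copy()
    out[mask] = background[mask]  # paste background through the mask
    return out, mask
```

A real pipeline refines a mask like this with the edge detection and color manipulation the quote mentions, since a single per-pixel threshold breaks down under uneven lighting and without a physical green screen.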

The app, which Hyman, the former CEO of Beats Music, says acquired just over one million downloads, partnered with, and raised additional funding from, the startup’s existing investors, but struggled to get other venture capital firms on board. In the end, the startup decided to put itself up for sale. Hyman says it reached the final stages of talks with an acquirer, but that deal failed to come to fruition.

Titled “My mobile AR startup died so yours doesn’t have to,” Hyman’s blog post contains lots of insight for other companies and is well worth a read. TechCrunch has contacted the team to ask what they plan to do next.


Adan Medical is bringing technology to Epi-Pens

Going into anaphylactic shock is terrifying. The last time I went into shock, I called my mom first, injected myself with an Epi-Pen second, injected myself with a shot of epinephrine third, and called 911 last. I had that a bit backwards.

Had I been using Adan Medical‘s smart case for Epi-Pens, my mom and emergency response teams would’ve been notified the moment I opened the case. I still would’ve ended up in the emergency room, but I would’ve received help a bit faster. With these types of allergies, every second can be critical.

Adan Medical’s smart case and app also tell allergy sufferers if they’ve left their Epi-Pen behind, whether at home, work or a friend’s house. With Adan Medical’s app, those who suffer from severe allergies can also keep track of when their Epi-Pens are set to expire and the condition they’re in.


Adan Medical recently finished a clinical trial around its smart case, which it expects to sell for about $70. The trial consisted of about 100 people and looked at whether or not the smart case provided peace of mind for allergy sufferers, Adan Medical COO Francesc Trias Puig-Sureda told me at TechCrunch Disrupt SF 2017. The results of that trial, Puig-Sureda said, will be published in about a month. At that point, Adan Medical will be able to continue its talks with Mylan, the makers of Epi-Pen.

Epi-Pens are still pretty costly in the U.S. Last month, I picked up a two-pack listed at $375. Two-packs are recommended because depending on the severity of the reaction, one Epi-Pen might not be enough to save your life. Thankfully, my insurance saved me $315. Thanks Tim Armstrong!

But Epi-Pens weren’t always that expensive. From 2007 to 2014, the cost of an out-of-pocket Epi-Pen went from $94 to $609, according to JAMA Internal Medicine. In 2016, Mylan responded to the public outcry of the cost and released a $300 generic Epi-Pen.

Recognizing that the regulatory process in the U.S. can be “quite slow,” Puig-Sureda said, Adan Medical is not clear on when it will be able to launch in the U.S. Before it starts selling its medical devices in the U.S., Adan Medical will work to get approved by the Food and Drug Administration. Given that the regulatory process is different in Europe, Adan Medical expects to sell its smart case directly to consumers in about six months.

Adan Medical is also making a box for public spaces, similar to how defibrillators are available at restaurants. The company is still developing the box, but Puig-Sureda said it could roughly cost about $300. The idea is that a school, restaurant, airport or mall would be able to have one of these boxes that would safely store Epi-Pens.

The promise of this box is that in the event someone does forget their Epi-Pen, goes out to eat at a restaurant and proceeds to have an allergic reaction, they can rest assured knowing that the restaurant has non-expired Epi-Pens on hand for both adults and children. This smart box is a bit down the road, but Adan Medical plans to run a study in several schools starting next school year.

In the meantime, I’m going to try to just remember to bring my Epi-Pens with me wherever I go, remain diligent about ensuring I’m not carrying around ones that are expired (I’ve done this before) and if I’m going into anaphylactic shock, inject myself with the Epi-Pen, call 911 and then call my mom.

Snap, Facebook, Pop: Sriram Krishnan joins Twitter as senior director of product

In the ever-moving musical chairs that is the tech industry, Twitter has added a new person to its product team. Sriram Krishnan is now senior director of product at the social media platform. He announced the news himself, appropriately on Twitter, with a hearty endorsement from CEO Jack Dorsey. He will be reporting to Keith Coleman, who is Twitter’s new-ish VP of Product (he joined in December).

You may know Sriram from Tech Twitter — he’s a regular fixture there, pontificating, debating and snarking in 140 characters or less. But he’s also had an interesting career across some of the biggest names in the Valley, most recently at Snap, and before that at Facebook, Yahoo and Microsoft (in order of most recent employers), in areas like product and advertising.

It’s unclear what he will be working on specifically at Twitter, but there are plenty of issues to fix. When the company reported its earnings last quarter, it showed continued stagnant growth and sent its stock into a nosedive. While Twitter has a cadre of very dedicated users, it’s proven a challenge for the company to find the right combination of services and user experience to attract and keep a wider audience, to help drive its bigger strategy of monetizing through advertising.

Krishnan will start October 2.

so…I have some news.

— Sriram Krishnan (@sriramk) September 19, 2017

Pokémon Go creator’s next game will incorporate audio into the AR experience

Niantic, makers of the hugely popular mobile game Pokémon Go, is developing a new AR title that will incorporate audio cues into the gaming experience, according to company CTO Phil Keslin. The executive had just exited the stage at TechCrunch Disrupt San Francisco, where he had spoken on a panel about the upcoming wave of AR applications, and the potential for the AR market in the wake of new developer tools from Apple and Google.

On the panel, Keslin had talked about one of the problems with AR – that it’s somewhat awkward to hold your phone up, the way you do with AR games like Pokémon Go.

“I can tell you from experience that people don’t do this,” he said, mimicking how people playing an AR game would hold their phones. “It’s very unnatural. It makes them look like a total doofus if they’re doing it for an extended period of time,” he added.

Pokemon Go CTO says people do not like pointing their phones for AR.

“No one walks around like this”

— TechCrunch (@TechCrunch) September 19, 2017

“In Pokémon Go, the only time they really use it is to share their encounter with the Pokémon. To take that one picture, which is natural….Everybody takes a picture, and then they’re done. It’s not walking around the world with the phone in front of their face,” Keslin said.

However, he did seem intrigued by the way that audio could be integrated into the AR experience, saying that “audio is different. You can hide that.”

Most people today walk around with their audio earbuds stuck in their ears all the time, he noted. “Nobody knows that they’re being augmented then.”

[Photos: Phil Keslin (Niantic Inc.) at TechCrunch Disrupt SF 2017; AR panel with Phil Keslin (Niantic Inc.), Nathan Kong (The CurioPets Company), Ross Finman and Diana Hu (Escher Reality), and Cyan Banister (Founders Fund)]

We followed up with him after the panel, where he explained that audio was something that Niantic had toyed with back when they were building Ingress, the location-based, augmented reality game that was something of a precursor to Pokémon Go.

Though not all the features made it to the final product, the team had thought about using audio in a variety of ways in Ingress, from telling you which location to visit to having your phone call you when you reached a waypoint to give you another clue. Another possibility was combining audio with the phone’s sensors, like an accelerometer, to know what a person was doing.

Perhaps the game would tell you to go left, look around, but “don’t look up!”, for instance.

This bigger idea that audio could enhance an AR experience will come into play in the future Niantic title.

Asked first if audio clues would ever come to Pokémon Go, Keslin told us: “Maybe. Or maybe we’d use it in other games,” he said with a smile. “We’re not a one-game wonder.”

Keslin wouldn’t talk about the new game in detail, for obvious reasons, but did confirm it’s under active development. (Can we play it next year? “Maybe,” he said. His favorite answer!)

However, Keslin would speak to his thoughts on audio in general.

“I think audio is significant. It’s one of our senses. It’s one of things that really drives us. I want to look at ways to incorporate audio in future titles,” Keslin told us. “AR is not just visual.”

BeeLine’s simple navigation device keeps cyclists headed in the right direction

Let’s say you’re roaming the city on your bike. You know your final destination — but you don’t necessarily care how you get there, or if your route is the shortest possible one. You just want to ride.

That’s the idea behind BeeLine, a “fuzzy navigation” device for cyclists. You tell it where you want to go, and it constantly points you in that direction — but it doesn’t try to tell you exactly when and where to turn. The company showed off its product today in the TechCrunch Startup Battlefield at Disrupt SF 2017.
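BeeLine hasn’t published how its firmware works, but the heart of any “point at the destination” display is a standard calculation: take the initial great-circle bearing from your position to the destination, then subtract your compass heading to get the arrow angle. A minimal sketch under that assumption (function names are ours, not BeeLine’s):

```python
import math

def bearing_to_destination(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2,
    in degrees clockwise from true north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) \
        - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def arrow_angle(heading, lat1, lon1, lat2, lon2):
    """Angle an on-device arrow should show relative to the rider's
    current compass heading (0 means the destination is dead ahead)."""
    return (bearing_to_destination(lat1, lon1, lat2, lon2) - heading) % 360
```

On a device like this, the heading would come from a digital compass and the position from the paired phone’s GPS; the arrow simply re-renders as either one changes.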

BeeLine didn’t know they’d be competing on the main stage when they came to Disrupt. They came to be a part of the Startup Alley exhibition room, where they were voted into the “wildcard” slot by attendees roaming the hall.

BeeLine latches onto your bike’s handlebars, a clever wrap design letting you put it on and take it off within a few seconds. When you’re not on your bike, you can pull off the BeeLine and attach it to your keyring to keep it out of the reach of would-be thieves. The built-in battery should last about 30 hours of use. It’s got an ePaper display, and a built-in accelerometer, digital compass, and gyroscope.

BeeLine costs $129, and works with both iOS and Android. You use the smartphone companion app to set your destination — or, if you’re making multiple stops (or want to take a specific path), your route.

BeeLine started its life on Kickstarter, with the London-based team raising £150,185 (roughly $200,000) after setting out to raise £60,000. It officially hit the shelves in February of this year, and they’ve sold about 8,000 units to date.

So what sparked this idea? BeeLine co-founders Tom Putnam and Mark Jenner were meeting at a coffee shop to discuss potential ideas… but Mark got lost on his bike along the way.

Battlefield Judge Q&A

How much does it cost to make?

About £35.

What’s your advantage over an iPhone app that does something like this?

This way you don’t have to strap an expensive phone to your handlebars. We’re also able to make it weather resistant.

What’s the battery life?

About 30 hours when it’s being used.

Steve Jurvetson on why he couldn’t join the board of secretive Zoox

Today at our TC Disrupt event in San Francisco, we had the chance to catch up with investor Steve Jurvetson about a wide number of things that are sweeping across the startup landscape (and might fundamentally change it), from ICOs to Softbank’s giant Vision Fund to AI to Elon Musk’s new Boring Company.

Jurvetson had plenty of interesting insights about all, unsurprisingly. He somewhat famously graduated from Stanford in 2.5 years, at the top of his class, and has led early investments in many pioneering companies, from Hotmail to SpaceX and Tesla. (He sits on the boards of the last two, alongside Musk.)

Another board seat Jurvetson planned to take but didn’t, he said today, was with Zoox, the three-year-old, Menlo Park, Calif.-based startup that’s building self-driving cars from the ground up with an eye toward picking up passengers as a service.

A year ago, Zoox raised what Jurvetson characterized as the biggest round of Series A funding ever when it closed on $240 million, including from DFJ, Lux Capital, Blackbird Ventures and others.

So much money makes sense, argued Jurvetson. “It’s a capital-intensive business. If you’re going to operate a fully autonomous driving service in an urban environment – imagine an Uber- or Lyft-like service without humans in the loop — that is a big innovation stream.”

What Jurvetson didn’t anticipate when his firm, DFJ, wrote a check to Zoox —  “before they had a board, before they really had any structure whatsoever, before the Series A” —  was that Musk would make plans to jump into the car-as-a-service business, too.

Though “[t]here was no whisper of Tesla being competitive” early on, said Jurvetson, that changed abruptly on a Tesla shareholder call last October, when Musk told analysts and investors that he was capable of creating a car-sharing network so compelling that customers would abandon Uber and other ride-sharing companies to adopt it.

Jurvetson said he may have unwittingly stirred Musk’s competitive juices by recounting, to a crowd in 2015, a conversation he’d had with then Uber CEO Travis Kalanick about Uber potentially buying Tesla’s autonomous cars and operating them in the service of Uber.

“I thought that was such a typical braggadocio, typical Travis kind of statement,” he joked on stage today. But it was “picked up by the press, and a Morgan Stanley analyst asked Elon about it, and he was like, ‘Well, of course, we might someday do that, too. We might be in that business.’”

Long story short, with Musk broadening his vision for Tesla, Jurvetson wasn’t able to take the seat with Zoox after all, with DFJ instead installing longtime partner Heidi Roizen in his stead. And while it’s easy to imagine that DFJ’s respective stakes might make both Zoox and Tesla nervous about what kind of information is being shared internally, Jurvetson insisted on stage that there’s a “Chinese wall,” meaning that he and Roizen never discuss the companies or their respective strategies.

By the way, we asked Jurvetson if he could confirm reports that Zoox is in talks with Softbank about a huge round that would value it at between $3 billion and $4 billion. Jurvetson declined to do that, saying “they haven’t finished that” (meaning that new round).

Notably, he did say separately that a DFJ company that he declined to name “went to Japan and sat down with [Softbank founder] Masayoshi Son” in what sounded like the not-too-distant past. The team was looking for between $50 million and $100 million, Jurvetson said. Instead, they were handed a written term sheet for $800 million — 45 minutes into the sit-down.

Facebook design head loud on voice, silent on Alexa and hardware

Facebook has reportedly been working on a video chat device and a smart speaker that would use a new voice interface, to compete against the likes of Amazon and Google in the race to control your living room; and it’s also been testing a voice search feature for its app. Today, the company’s head of design, Luke Woods, came out bullish on the promise of voice commands, squirmed when we asked him about Alexa apps, and simply shut down with a series of no-comments on the subject of voice-interface hardware.

During an interesting session at TechCrunch Disrupt that ranged across topics like copying Snap (“In design school, we learned that there is nothing new anymore.”); solving hard problems (“We treat edge cases as stress cases.”) and the push to keep changing and developing design at Facebook (“There is a tendency to think we’ve got it all figured out, but that couldn’t be farther from the truth.”), Woods was asked for his thoughts about new interfaces like voice.

He cautioned that voice still doesn’t have the perfect product end point yet, but said services like voice commands and voice search are promising.

“What’s interesting about these [newer] areas is that they hold a huge amount of promise…these are some of the problems that people are most excited to solve,” he said. “We don’t know what form [AR and VR] are going to take at the end or what’s going to work.”

Woods continued by describing voice search as “very promising. There are lots of exciting things happening…. I love to be able to talk to the car to navigate to a particular place. That’s one of many potential use cases.”

Voice navigation for Facebook is another thing he touched on, but for a particular group of users: the company has already built voice search and voice descriptors for people who have impaired vision, to be able to interact with the social network, and navigate to different features.

Then things got a little less direct. “How would Facebook on Alexa work?,” interviewer Josh Constine asked. “That’s a very interesting question!” Woods said, smiling. “No comment.”

When asked how Facebook would work on Alexa, FB’s head of design responds with a smile and no comment

— TechCrunch (@TechCrunch) September 18, 2017

Later, on the sidelines of the stage, Woods also declined to comment on whether Facebook is working on hardware of any kind that could use voice services, with another smile.

The lack of response is at odds with some of the other evidence that has been piling up around Facebook and what it may be looking to launch in coming months and years. In the range of tests that we and others have spotted Facebook running in different countries, what’s notable is the profusion of voice-based services that are coming out.

In July, a report emerged out of Taiwan that the company was building a speaker with a screen that would let users message each other and browse photos — which sounded somewhat like a Facebook equivalent of Amazon’s Echo Show. The speaker, designed in the company’s secretive Building 8 hardware lab, was supposedly getting made for a Q1 2018 launch.

Just a week later, Bloomberg appeared to corroborate the report out of Taiwan with another interesting twist: in addition to the speaker with a screen, there was another standalone speaker with no screen, akin to the Echo and Google’s Home hub.

Then, late in August, BusinessInsider discovered that the speaker device was being built under the codename Aloha, with a launch date of May 2018.

The voice noise did not end there. Earlier this month, Facebook code super sleuth Matt Navarra noted that Facebook’s iOS app contained a large amount of code related to voice search. These appear to be different from the accessibility voice features that Facebook has built.


Facebook has made at least one interesting acquisition in speech and natural language recognition: an AI and natural language startup whose team helped build Facebook’s bot strategy for its Messenger app. But in addition to that, it’s also been looking to hire engineers with expertise in speech and natural language processing.

These jobs mention other speech-based services that Facebook is working on, such as video captioning services for videos that are played on silent, but the expertise has wide applicability beyond that.

Here’s the full video of the conversation:

Looxid Labs is combining brain waves and VR to build an analytics super engine

Virtual reality is having a tough time breaching consumer markets, and it’s not clear where the demand is exactly. But when it comes to research and analytics, gathering emotional responses could be huge for custom-tailoring content and gaining important insights.

Looxid Labs, launching onstage today at TechCrunch Disrupt Battlefield, wants to bring brain waves into the VR equation.

There are a few companies doing the whole track-your-emotions-in-VR thing, but the products from VR unicorn MindMaze and demos from Samsung rely on tracking facial muscles and inferring overall emotions, while others are using computer vision sensors to track your lips. Meanwhile, Looxid Labs is using EEG paired with eye-tracking to track your emotional responses, with proprietary algorithms that infer how you’re feeling.

EEG tech always seems to be a bit nebulous in what it can actually deliver in a product, but Looxid Labs isn’t rushing to consumer markets; instead, it’s focusing heavily on research and analytics applications that can surface insights based on your emotional reactions to VR experiences.

The startup’s product, LooxidVR, is a system to interpret all the sensory input it’s gathering. Their research kit and analytics product will allow tons of industries that have already shown interest in VR to more capably capture user reactions. For medical use cases surrounding pain management or physical therapy, tracking emotions has some readily apparent use cases. Detecting traits like confusion could be really important to educational and training apps, preventing important concepts from escaping users.

The close integration with the headset allows you to pinpoint what objects a subject was directing their attention toward and what the corresponding emotion was. The insights delivered are measured across three scales, interpreting where your current emotional state falls on happiness/sadness, dominance/submissiveness and excitement/depression.

Looxid wants to embrace the consumer market, but it isn’t focusing on a major entry in the near future. They’re working on a developer kit targeting early adopters in the consumer space. Emotion-tracking has been something a lot of social VR applications have been interested in, but the current hardware doesn’t have an integrated solution. An EEG cap might not make the most sense as a mass market option, but Looxid has integrated a less robust system into the head strap of the developer kit.

Looxid has some challenges ahead of it if it wants to approach the consumer market heavily. Integrating new VR input tech with OEMs can be incredibly difficult, and startups can sink a lot of money into trying to build out a developer ecosystem large enough to get traction early on. There is less risk — and less reward — in ignoring consumers and focusing on research, but the needs of that community align a lot more with where Looxid Labs is showing its strengths.
