Leaderless Bitcoin Struggles to Make Its Most Crucial Decision

Bitcoin’s most influential developer has proposed a controversial fix that would help it handle more transactions.

By Mike Orcutt on May 19, 2015


No one person or entity controls Bitcoin’s development.
In a test of Bitcoin’s ability to adapt to its own growing popularity, the Bitcoin community is facing a dilemma: how to change Bitcoin’s core software so that the growing volume of transactions doesn’t overwhelm the network. Some fear that the network, as it’s currently designed, could hit its capacity as early as next year.

The answer will help determine the form Bitcoin’s network takes as it matures. But the loose-knit community of Bitcoin users is not in agreement over how it should proceed, and the nature of Bitcoin, a technology neither owned nor controlled by any one person or entity, could make the impending decision-making process challenging. At the very least it represents a cloud of uncertainty hanging over Bitcoin’s long-term future.

The technical problem, which most agree is solvable, is that Bitcoin’s network now has a fixed capacity for transactions. Before he or she disappeared, Bitcoin’s mysterious creator, Satoshi Nakamoto, limited the size of a “block,” or group of transactions, to one megabyte. The technology underlying Bitcoin works because a network of thousands of computers contribute the computational power needed to confirm every transaction and record them all in a permanent, publicly accessible ledger called the blockchain (see “What Bitcoin Is and Why It Matters”). Roughly every 10 minutes, an operator of one of those computers wins the chance to add a new block to the chain and receives freshly minted bitcoins as a reward. That process is called mining.

Under the one-megabyte-per-block limit, the network can process only about three transactions per second. If Bitcoin becomes a mainstream payment system, or even a platform for all kinds of other online business besides payments (see “Why Bitcoin Could Be Much More Than a Currency”), it’s going to have to process a lot more. Visa, by comparison, says its network can process more than 24,000 transactions per second.
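The arithmetic behind the “about three transactions per second” figure can be sketched in a few lines. This is a back-of-the-envelope estimate, not a protocol calculation: the 500-byte average transaction size is an assumed working figure (real transactions vary widely), and the 10-minute block interval is an average.

```python
# Rough throughput estimate for a given block-size cap.
BLOCK_INTERVAL_S = 600  # one block roughly every 10 minutes

def tx_per_second(block_size_mb, avg_tx_bytes=500):
    """Transactions per second supported under a block-size cap,
    assuming an average transaction size (a rough figure)."""
    txs_per_block = block_size_mb * 1_000_000 / avg_tx_bytes
    return txs_per_block / BLOCK_INTERVAL_S

print(tx_per_second(1))   # current 1 MB cap: ~3.3 tx/s
print(tx_per_second(20))  # proposed 20 MB cap: ~66.7 tx/s
```

By the same rough math, a 20-megabyte cap would lift the ceiling to roughly 67 transactions per second, still far short of Visa’s quoted capacity.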

The developers in charge of maintaining Bitcoin’s core software have been aware of this impending problem for a while. Gavin Andresen, who has led work on Bitcoin’s core code since Nakamoto handed him the reins in 2010, told MIT Technology Review last August that his favored solution to the problem is to increase the maximum block size (see “The Man Who Really Built Bitcoin”). Earlier this month, Andresen got more specific, proposing that the maximum block size be increased to 20 megabytes starting in March 2016, calling this the “simplest possible set of changes that will work.” In a subsequent post on his blog, Andresen called the need for the change “urgent,” noting that the network would likely become unreliable if it were allowed to reach the current limit.

Mike Hearn, a former Google software engineer who has contributed to Bitcoin’s development, has calculated that at the current rate of transaction growth, the limit will be hit sometime in 2016. “Because upgrades take time, we need to prepare for this now,” Hearn writes in his own recent post on the issue.

The problem is that a consensus is required to make a change as consequential as the one Andresen suggests, which would substantially alter the requirements for mining. And not everyone in the community of users of the Bitcoin software—which includes miners, developers, and a growing number of startups—agrees that Andresen’s proposal is the best path forward.

A popular argument against the change is that it would favor bigger, richer mining operations that could afford the increased costs that go along with processing and storing bigger blocks. That could lead to a dangerous “centralization” within the mining community, says Arvind Narayanan, a professor of computer science at Princeton University (see “Rise of Powerful Mining Pools Forces Rethink of Bitcoin’s Design”). Another, more ideological argument is that Bitcoin was never supposed to change this drastically from Nakamoto’s original design. Some even argue that the limit doesn’t need to increase at all, as long as the developers make smaller adjustments to prevent the network from buckling when it hits the cap—though that could make it more expensive to get transactions confirmed without delays.

The growing commercial ecosystem around Bitcoin is at stake. If the limit remains fixed, businesses hoping to store lots of transactions on the blockchain could be out of luck. And such interest is only increasing—earlier this month, the Nasdaq stock exchange said it was testing Bitcoin’s blockchain for transactions in its private market subsidiary. If the test is successful, the exchange says, it could use the technology for all Nasdaq trades in the public market.

Will Bitcoin be able to handle that? Pieter Wuille, another of Bitcoin’s five core developers, says right now there are just too many unknowns about the consequences of increasing the block size to 20 megabytes. In addition to significantly raising the cost of validating transactions, which could force out smaller players, he says, there may be “things we don’t even know of that could break.” Wuille is in favor of increasing the block size “in general,” but says a smaller increase at first would be less risky.

For now, the debate will continue to play out on the Bitcoin development mailing list, a forum that includes the core developers as well as the many others who contribute code.

Ultimately, though, the decision-making process “really comes down to how the core developers feel about it,” says Narayanan, since they are the only ones with the power to change the code. Complicating things even further is the fact that it’s not exactly clear how they would solicit input from all the stakeholders, many of whom may prefer to remain anonymous. The core developers could eventually find it necessary to take matters into their own hands.

At least one of them thinks that would be a bad idea, though. That would set an “incredibly dangerous precedent,” says Wuille.


Uber Tests Taking Even More From Its Drivers With 30% Commission

Ride-hailing service Uber is testing, again, to see whether new drivers are willing to do the same job as others for less pay.

Uber is bumping up some drivers’ commission to 30%, its highest level ever, the company told FORBES Monday. In a new pilot program in San Francisco, a small percentage of new UberX drivers will pay a 30% commission on their first 20 rides in a week, 25% on their next 20 rides, and then 20% on any rides beyond that. Uber is also testing the same commission in San Diego, except that the tiers are for the first 15 and next 15 rides in a week.

The tiered structure, which Uber began testing in April, will hit new, part-time drivers the hardest. Those who only work a few hours a week will never see a 20% commission, which used to be standard for UberX, the company’s product where people use their personal cars for work. The new system rewards drivers who work more per week, and full-time drivers will likely reach the top commission tier in a couple of days, but even the most dedicated get dinged on their first 30 or 40 rides.
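The tiered schedule lends itself to a short calculation. Here is a minimal sketch: the rates and tier sizes come from the pilot as described above, while the function names and the $10 fares in the usage note are invented for illustration.

```python
def uber_commission(ride_index, tier_size=20):
    """Commission rate for the nth ride of the week (1-indexed)
    under the tiered pilot: 30% on the first tier, 25% on the
    second, 20% after that. tier_size is 20 in San Francisco
    and 15 in San Diego."""
    if ride_index <= tier_size:
        return 0.30
    if ride_index <= 2 * tier_size:
        return 0.25
    return 0.20

def driver_take(fares, tier_size=20):
    """Total driver earnings for one week's list of fares."""
    return sum(fare * (1 - uber_commission(i, tier_size))
               for i, fare in enumerate(fares, start=1))
```

For a hypothetical 60-ride week of $10 fares in San Francisco, this yields $450 for the driver, versus $480 under a flat 20% commission.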

An Uber email sent to a driver explaining the new tiered commission structure that is being tested on some drivers.

Uber has already been studying the effects of taking a higher commission. In September, the company began taking 25% from all new drivers in San Francisco. Exploring a 30% commission suggests that no matter how low the pay goes, new drivers are still willing to sign up. Uber often tests or debuts new products like its carpool, UberPOOL, in San Francisco, its hometown. Uber has yet to decide how long this pilot program will last or whether it will spread to other new drivers or markets, the company said.

Uber and rival service Lyft make most of their money on driver commission, and both began with a standard 20% commission. Both have also flirted with taking less commission — Uber down to 5%, and Lyft down to zero — at times when they wanted to lower rider fares but keep paying drivers so they wouldn’t quit. But in both cases, those low-commission times ended after a few months.

After a period of several months during which it collected no commission, Lyft reintroduced driver commissions last August with a twist: drivers who work 50 or more hours a week pay no commission on their earnings, and those who drive between 30 and 50 hours pay a 10% commission.

With its new tiered commission structure, Uber is taking a page out of Lyft’s book—except that the drivers’ ultimate reward is getting to keep 80% of their fare, not 100%. It’s still better than 70%, which is what part-time UberX drivers will get under the new system. (Drivers in the test group can earn a little more than drivers who were onboarded with a 25% flat commission, if they take many rides a week.)

Uber has kept quiet about the new commission structure. Luke, a new driver who works in the San Francisco market, said he didn’t even know he was getting a worse deal than almost every other Uber driver out there until Uber texted him saying he was a few rides away from a new commission tier.

“I was confused — I thought it was 20%,” he said. “I started looking online and everywhere I saw said 20%, so I was like, ‘What’s going on?’”

Luke, who didn’t want to give his last name for fear of retaliation from Uber, said he didn’t remember seeing any alert about a new commission structure while signing up. But last week, a day after FORBES first asked about the new commission, he was shown an alert on his driver app requiring him to agree to the tiered commission before he could continue driving.

When drivers in the test group figured out they were being paid less than others, some said they would stop driving. “You’re almost being penalized for being a part-time driver,” said a San Diego driver who didn’t want to give his name and said he stopped driving when he found out about the high commission.

“You really can’t take advantage of that 20%,” he said. “You get irritated until you get to the point where you’re just used to it being 30%.”

The pilot program, which Uber spokeswoman Eva Behrend called “a limited test” that gives drivers “the opportunity to earn more based on the number of trips they drive,” will not affect drivers who joined before April. Uber’s commission for its premium products like black cars and SUVs ranges from 25% to 28%.


Facebook’s Internet.Org Hits Global Flak

By David Talbot on May 18, 2015


The Internet is still unavailable to four billion people.
Brazil’s president, Dilma Rousseff, wears a hoodie adorned with the Facebook logo and Brazil’s flag, given to her by Mark Zuckerberg at a conference last month in Panama. She holds the power to allow or block Internet.org in Brazil.

In January Facebook founder Mark Zuckerberg clinched the first Latin American customer for Internet.org, which lets people use certain websites and apps without incurring data charges. Standing with Colombian president Juan Manuel Santos, he announced that the mobile carrier Tigo would provide “free basic services” through the app, which Zuckerberg argues is how the world’s poorest should get online.

Already, though, Internet.org is running into trouble in Colombia because of criticisms that are being echoed in many other countries. The opposition is adding up to a strong challenge to Zuckerberg’s vision of using Facebook as a central part of a strategy to introduce the Web to Internet newcomers.

Today 60 people from digital-rights groups in 28 countries or regions around the world signed a joint letter to Zuckerberg criticizing many of Internet.org’s practices on fairness, privacy, and security grounds. Among them are the Zimbabwe Human Rights NGO Forum, Pakistan’s Digital Rights Foundation, and similar groups in Brazil, Indonesia, Uganda, and Cameroon.

Also on the list is the Karisma Foundation, a digital-rights group based in Bogotá. It points out that Tigo is telling customers it will discontinue the free app on May 31. Tigo recently decided to offer a 60-day free trial of Facebook, which users are confusing with the Internet.org app that gives free trials to multiple services, says Carolina Botero, Karisma’s president. “We have done some informal inquiries in the neighborhoods and found that people don’t realize they are only on Facebook—not on the Internet,” she says. Colombia’s government is channeling government information through Facebook’s app rather than making it available directly, she adds. “This was presented as a project meant to be an important universalization of the Internet,” she says. “But contrary to transparency principles, we have no information on the contract with Tigo, or how it came about. It’s only a few apps which they choose—and we don’t even know why or how.”

The new controversy comes after a recent furor in which more than a million Indians signed a petition asking India’s telecom authority to ban the app (see “Indian Companies Turn Against Facebook’s Scheme for Broader Internet Access”). “It is our belief that Facebook is improperly defining net neutrality in public statements and building a walled garden where the world’s poorest people can only access a limited set of insecure websites and services,” the letter published today says. “In its present conception, Internet.org thereby violates the principles of net neutrality, threatening freedom of expression, equality of opportunity, security, privacy and innovation.”

The opposition isn’t to Facebook per se. The main Facebook app or website is very popular among people who can afford data plans. (After the United States, Facebook’s next-largest user bases are in India, Indonesia, and Brazil.) Rather, it is to Internet.org specifically.

The free system works like this: Users download the Internet.org app. Through it, they get a simple version of Facebook plus access to a collection of other apps—often stripped-down websites for weather, health, and jobs—that pass through a Facebook approval process. The local telecom company foots the bill, a process known as “zero rating.”

That is a business model Zuckerberg has explained as “free service with upsells.” In other words, get people interested with the free stuff, then charge them when they use more data—for example, if they want to download a photo that someone posted on Facebook. Zuckerberg frames the idea altruistically. “If someone can’t afford to pay for connectivity, it is always better to have some access than none at all,” he wrote recently.

A growing number of opponents argue that Facebook’s effort will create a de facto two-tier Internet—one tier curated by Facebook, and the other open to everything, for anyone who can afford it. But the joint letter also addresses issues of privacy and security. The groups worry that Facebook will make it easy for state-run telecoms to monitor users through this centralized system—and that the app could in some cases enable countries to spy on and repress their citizens. Adding to the concerns, Facebook is not supporting apps that use encryption.

A Facebook spokesman said in an e-mail this week that Facebook “doesn’t share user-level navigation information” with its partners or store it at all beyond 90 days. Meanwhile, many feature phones can’t handle encryption; Facebook says it is working fast to overcome this problem but did not offer a time line. (As for Colombia, the Facebook spokesman said the company was looking into Tigo’s May 31 deadline for the Internet.org app, and said that Tigo’s 60-day free Facebook offering has “nothing to do with Internet.org” even though the latter also includes free Facebook. Tigo has not responded to requests for comment.)

Facebook keeps adding more deals with carriers; Zuckerberg said in a post Wednesday that a new deal in Malawi brings the number of people with access to free Internet services through the app to a billion, at least in theory. (The number of people who have actually downloaded and used the app is nine million, according to Facebook.)

Facebook did not invent the concept of zero rating, which is in use in various ways around the world (see “Around the World, Net Neutrality Is Not a Reality”). But whether a Facebook-curated scheme is the best way to provide access is an open question. “It would be extremely dangerous if governments weigh in to favor one company or commercial model for expanding access,” says Carolina Rossini, a Brazilian lawyer who is vice president for international policy at Public Knowledge, a think tank in Washington, D.C.

Other models for free access are emerging. One of them is from Jana, a Boston startup, which is offering a service through carriers in 15 countries (see “Facebook’s Controversial Free App Plan Gets Competition”). Under that scheme, an app developer can underwrite a user’s cost of both downloading and using an app; users get a bonus of extra data to use for anything.

Many countries, like Brazil, have enacted laws that make strong commitments to universal access and support net neutrality, the principle that no set of applications should be favored over any other. Some countries, like Chile, expressly ban zero rating. But in most cases, the legal picture is ambiguous. Brazil, for example, has a strong universal-access law called the Marco Civil. Clearing up whether Facebook can operate there will require a stroke of the presidential pen asserting it one way or another.

No surprise then, that at the Summit of the Americas in Panama last month, Zuckerberg gave the president of Brazil, Dilma Rousseff, a hoodie adorned with Facebook’s logo and Brazil’s flag. The surprise, Rossini said, was that Rousseff gamely put it on and smiled for the press.

This story was updated on May 18, 2015, to clarify the description of Internet.org.

Preparing for the Cyber-Attack That Succeeds

May 13, 2015 | Provided by AIG

Cybercrime is on the rise. According to Symantec, more than 1 million people are victims of cyber-attacks every day, at a global annual cost to consumers of almost $113 billion. The cost to businesses is even greater. A recent study sponsored by McAfee, a subsidiary of Intel, put the global figure at more than $400 billion annually. And, of course, beyond the dollars, the cost in reputational damage, consumer confidence in the brand, and time to recovery can be enormous.

While major high-profile security breaches, such as those recently suffered by Target and Home Depot, make the biggest splashes in the news, the attacks are not limited to national and multinational companies. For example, the largest online breach targeting credit card data in Australia’s history occurred in December 2012, when criminals attacked 46 small and midsize businesses—the majority of which were service stations and individual retail outlets.

The principal lesson to be learned is that companies of all sizes are vulnerable to cyber-attacks. Unfortunately, many don’t view themselves that way because they believe they are too small to be targeted. But from a risk-management perspective, that is exactly the wrong attitude to take.

Because of the devastating impact that a major breach can have—on both the top and bottom lines, on the brand, and along many other dimensions of the business—and because of the increasing likelihood that such an event may one day occur, it is prudent to rank cyberthreats as one of the three largest areas of exposure for essentially every business. As such, thwarting cyber-attacks, as well as planning for how the company will respond in the event of a successful major breach, should be a C-suite-level concern, and not something relegated to the IT department and then promptly forgotten—until it’s too late.

An Ounce of Prevention

A first step in assessing your company’s exposure to cyberthreats is to conduct a thorough inventory of your data-collection and data-storage protocols. What kind of data do you have? How is it being protected? In addition, what does the threat environment look like for your company and industry? How frequently are your systems being attacked? Your competitors? According to The Wall Street Journal, immediately after Target made its data breach public, executives at Home Depot began conducting a threat assessment of their company’s exposure to a similar attack, and soon afterwards began implementing heightened security measures across the organization. Unfortunately, as we now know, hackers were able to infiltrate Home Depot’s systems before these steps could be fully put in place.

Fortunately, the majority of attacks are not as sophisticated as those that struck those two major retailers. In fact, most cyberthreats do not target a specific company, and they can be stopped by the use of basic IT security measures, including up-to-date antivirus software and robust firewalls. However, as noted above, it is highly prudent to be prepared to defend against more dangerous efforts—and to think about what to do should a major breach occur.

Business Continuity and Risk Transfer

A key step is to build cyberthreats into your company’s business continuity plans, alongside other kinds of potential major disruptions. How would your business function if it suddenly lost access to critical data? What kinds of plans are currently in place for dealing with a major data breach? Running scenario-based drills to test the impact and response times to various types of breaches will aid in identifying where your company’s greatest weaknesses are, so that they can be adequately addressed. As Home Depot’s example demonstrates, it’s never too early to start.

There may still remain areas where, for various reasons, risk cannot be managed internally. In this case, the best decision may be to transfer the risk via a cyber-liability policy. These policies should be viewed as a supplement to, and not a replacement for, good risk management policies. But they can provide a vital source of liquidity in the days following a successful attack.

By taking cyberthreats seriously and building them into your business continuity plans and practices, your company will be better positioned to survive a major cyber-attack and get back to normal business operations quickly.

Why Google’s Self-Driving Bubble Cars Might Catch On

Google’s announcement Friday that it will test small, pod-style autonomous cars on public roads might seem surprising to anyone enthusiastic about—or just familiar with—conventional cars. The vehicles look cute but hardly impact-resistant, and they have a top speed of only 25 miles per hour.

But some experts suspect that the unconventional two-seater vehicle, known within Google as “Prototype,” represents a practical strategy to get fully autonomous cars into everyday use. Google still has significant work to do before its software can handle all the situations a human driver can. But it will be easier to build, test, and market small vehicles for limited environments than to craft autonomous cars that can handle everything from high-speed freeway driving to city streets, they say.

“There’s going to be an enormous market for small autonomous vehicles,” says Gary Silberg, an auto industry analyst at the consulting firm KPMG. He cites city centers, airports, campuses, and amusement parks as places where vehicles much like those Google is just starting to test could fit in. “From a market perspective, it’s a huge opportunity,” he says.

Google first unveiled its compact car design last year, in what seemed like a change in strategy from its effort to make conventional cars that were capable of driving themselves (see “Lazy Humans Shaped Google’s New Autonomous Car”). On Friday Google said that the new design will be unleashed on the roads of the company’s hometown of Mountain View, California, this summer. Eventually, up to 100 vehicles will roam the town’s suburban streets.

Prototype’s low top speed qualifies it for the less stringent vehicle safety standards the National Highway Traffic Safety Administration (NHTSA) applies to electric golf carts. The car must have lights, mirrors, and seat belts but is exempt from many of the crashworthiness standards and air-bag requirements of normal gas and electric vehicles. In Mountain View, it will be restricted to roads with a speed limit of 35 miles per hour or less.

That still gives the design a lot of scope as an urban taxi, says Bryant Walker Smith, an expert on autonomous vehicles at the University of South Carolina. “Almost the entire island of Manhattan between the expressways could be accommodated by a vehicle operating at 25 miles per hour,” he says. “Right there, you have several million people who could be serviced by a car like this.”

A slow, light car such as Prototype is also less apt to be involved in a catastrophic accident. Impacts are more likely to be gentle fender-benders rather than pile-ups, and there’s less potential to injure or kill pedestrians or cyclists. “A limited-environment low-speed vehicle will be technologically and socially viable sooner than a vehicle capable of operating anywhere,” says Smith.

But there are also drawbacks to focusing on such a limited vehicle. Tests of Prototype won’t give Google the experience with safety systems and crash tests it will need to design a more conventional autonomous car that can travel at higher speeds.

The Boston Consulting Group estimates that bringing full autonomy to market will cost car makers upward of $1 billion each over the next decade. That money will go to design prototypes, develop sensors and processing technologies, write integration software, and perform testing and validation. Apart from the navigation and decision-making algorithms, such systems are likely to be very different in a full-speed four-seater and a low-speed taxi.

Google also still has significant work to do on making its software capable of handling all road situations. The cars have recently gained some ability to cope with roadside joggers, police vehicles, and cyclists’ hand signals. But they still can’t reliably handle very rainy conditions or operate in areas that have not been mapped to centimeter-level accuracy.

The distinctive Prototype might help Google learn how other road users interact with fully autonomous vehicles. It is a Level 4 vehicle, an NHTSA classification applying to cars that require passengers to do nothing more than provide their destination.

At the moment, no one knows how people will react to a car with an empty driver’s seat. If people change their behavior in unpredictable ways, then Google’s software may have extra challenges. It can’t use social cues like eye contact or waving to other drivers and pedestrians as a human driver might to clear up a misunderstanding. However, California regulations currently require that all driverless test vehicles have a backup steering wheel, a brake pedal, and a safety driver ready to take over at all times.

Whatever Google’s ultimate aim, the company will continue to operate at least some of the 23 modified self-driving Lexus SUVs it already drives on public roads. They are able to drive anywhere Google has mapped in detail. In total, Google’s cars have covered 1.7 million miles around the Bay Area, including freeways and city and suburban streets.

Can We Identify Every Kind of Cell in the Body?

How many types of cells are there in the human body? Textbooks say a couple of hundred. But the true number is undoubtedly far larger.

Piece by piece, a new, more detailed catalogue of cell types is emerging from labs like that of Aviv Regev at the Broad Institute, in Cambridge, Massachusetts, which are applying recent advances in single-cell genomics to study individual cells at a speed and scale previously unthinkable.

The technology applied at the Broad uses fluidic systems to separate cells on microscopic conveyor belts and then submits them to detailed genetic analysis, at the rate of thousands per day. Scientists expect such technologies to find use in medical applications where small differences between cells have big consequences, including cell-based drug screens, stem-cell research, cancer treatment, and basic studies of how tissues develop.

Regev says she has been working with the new methods to classify cells in mouse retinas and human brain tumors, and she is finding cell types never seen before. “We don’t really know what we’re made of,” she says.

Other labs are racing to produce their own surveys and improve the underlying technology. Today a team led by Stephen Quake of Stanford University published its own survey of 466 individual brain cells, calling it “a first step” toward a comprehensive cellular atlas of the human brain.

Such surveys have only recently become possible, scientists say. “A couple of years ago, the challenge was to get any useful data from single cells,” says Sten Linnarsson, a single-cell biologist at the Karolinska Institute in Stockholm, Sweden. In March, Linnarsson’s group used the new techniques to map several thousand cells from a mouse’s brain, identifying 47 kinds, including some subtypes never seen before.

Historically, the best way to study a single cell was to look at it through a microscope. In cancer hospitals, that’s how pathologists decide if cells are cancerous or not: they stain them with dyes, some first introduced in the early 1900s, and consider their location and appearance. Current methods distinguish about 300 different types, says Richard Conroy, a research official at the National Institutes of Health.

Individual cells are captured and separated in bubbles of liquid, readying them for analysis.

The new technology works instead by cataloguing messenger RNA molecules inside a cell. These messages are the genetic material the nucleus sends out to make proteins. Linnarsson’s method attaches a unique molecular bar code to every RNA molecule in each cell. The result is a gene expression profile, amounting to a fingerprint of a cell that reflects its molecular activity rather than what it looks like.

“Previously, cells were defined by one or two markers,” says Linnarsson. “Now we can say what is the full complement of genes expressed in those cells.”
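The barcode-counting idea can be sketched in a few lines. This is a toy illustration, not Linnarsson’s pipeline: the gene names and barcodes are invented, and real workflows must also handle sequencing errors, per-cell barcodes, and amplification bias.

```python
from collections import defaultdict

def expression_profile(reads):
    """reads: list of (gene, barcode) pairs observed in one cell.
    Amplification can copy the same molecule many times, so we
    count distinct barcodes per gene rather than raw reads."""
    barcodes = defaultdict(set)
    for gene, barcode in reads:
        barcodes[gene].add(barcode)
    return {gene: len(tags) for gene, tags in barcodes.items()}

reads = [("Actb", "AAGT"), ("Actb", "AAGT"),  # duplicate copies of one molecule
         ("Actb", "CGTT"), ("Gfap", "TTAC")]
print(expression_profile(reads))  # {'Actb': 2, 'Gfap': 1}
```

The resulting per-gene counts are the “fingerprint” described above: a profile of a cell’s molecular activity rather than its appearance.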

Although researchers determined how to accurately sequence RNA from a single cell a few years ago, it’s only more recently that clever innovations in chemistry and microfluidics have led to an explosion of data. A California company, Cellular Research, showed this year that it could sort cells into micro-wells and then measure the RNA of 3,000 separate cells at once, at a cost of a few pennies per cell.

Scientists think the new single-cell methods could overturn previous research findings. That is because previous gene expression studies were based on tissue samples or blood specimens containing thousands, even millions, of cells. Studying such blended mixtures meant researchers were seeing averages, says Eric Lander, head of the Broad Institute.

“Single-cell genomics has come of age in an unbelievable way in just the last 18 months,” Lander told an audience at the National Institutes of Health this year. “And once you realize we are at the point of doing individual cells, how could you ever put up with a fruit smoothie? It is just nuts to be doing genomics on smoothies.”

Lander, one of the leaders of the Human Genome Project, says it may be time to turn pilot projects like those Regev is leading into a wider effort to create a definitive atlas—one cataloguing all human cell types by gene activity and tracking them from the embryo all the way to adulthood.

“It’s a little premature to declare a national or international project until there’s been more piloting, but I think it’s an idea that’s very much in the air,” Lander said in a phone interview. “I think [in two years] we’re going to be in the position where it would be crazy not to have this information. If we had a periodic table of the cells, we would be able to figure out, so to speak, the atomic composition of any given sample.”

Gene profiles might eventually be combined with other efforts to study single cells. Paul Allen, Microsoft’s cofounder, said last December he would be spending $100 million to create a new scientific institute, the Allen Institute for Cell Science. It will study stem cells and video their behavior under microscopes as they develop into various cell types, with the ultimate goal of creating a massive animated model. Rick Horwitz, who leads that effort, says that it will serve as a kind of Google Earth for exploring a cell’s life cycle.

The eventual payoff of collecting all this data, says Garry Nolan, an immunologist at Stanford University, won’t be just a catalogue of cell types, but a deeper understanding of how cells work together. “The single-cell approach is a way station that needs to be understood on the way to understanding the greater system,” he says. “In 50 years, we’ll probably be measuring every molecule in the cell dynamically.”


As Samsung’s Phone Empire Wanes, It Leans More Heavily On Chips

Samsung is still one of the major smartphone makers in the world today, but the growth of its handset business is slowing and margins are shrinking.

To counter those headwinds, Samsung is leaning ever more heavily on its chip business, which remains a rare bright spot — and an increasingly profitable one — at the company.

Samsung’s latest initiative is an effort to get its chips into as many new devices as possible in the emerging Internet of Things industry — tech’s moniker for the growing list of devices connected to the internet.

At a conference in San Francisco on Tuesday, Samsung president and chief strategy officer Young Sohn announced a new technology platform called Artik for the next generation of wearables, smart TVs, and drones. Artik consists of a processor chip that comes packaged with memory, sensors and various wireless chips.

The Artik modules come in three different sizes to fit different types of devices and will cost between $10 and $100. The chips are designed to help device manufacturers speed up development of new Internet of Things gadgets.

Samsung’s smallest module is the Artik 1, which is about the size of a ladybug. It contains a 250MHz dual-core processor, 4MB of flash memory, a Bluetooth radio, and a nine-axis sensor. It’s designed for tiny, low-power devices like fitness trackers.

The larger Artik 5 has a 1GHz dual-core processor, 512MB of DRAM, 4GB of flash memory, and a video processor, as well as WiFi, Bluetooth, and ZigBee, a technology that allows small devices to communicate. With video decoding and encoding built in, it would be ideal for something like an internet-connected camera.

The highest-end module is the Artik 10, which is built around Samsung’s Exynos chip, the processor that powers the new Galaxy S6 smartphone. It has an eight-core processor, 2GB of DRAM, 16GB of flash memory, a high-definition video processor, and all the same radio chips as the Artik 5. With the addition of a modem, it could be used for making phones, but Samsung is pitching it for applications like home servers and industrial equipment.

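
Taken together, the three modules form a simple tiered lineup, which can be summarized as a lookup table (the figures are those reported above; the table and function are illustrative, not Samsung's API):

```python
# Reported specs for Samsung's three Artik modules (illustrative summary).
ARTIK_MODULES = {
    "Artik 1":  {"cpu": "250MHz dual-core", "flash": "4MB",  "dram": None},
    "Artik 5":  {"cpu": "1GHz dual-core",   "flash": "4GB",  "dram": "512MB"},
    "Artik 10": {"cpu": "eight-core",       "flash": "16GB", "dram": "2GB"},
}

def modules_with_dram():
    """Return the names of modules that include DRAM."""
    return [name for name, spec in ARTIK_MODULES.items() if spec["dram"]]

print(modules_with_dram())  # -> ['Artik 5', 'Artik 10']
```

The tiering mirrors the target devices: the DRAM-less Artik 1 for fitness trackers, the mid-range Artik 5 for cameras, and the Artik 10 for home servers and industrial gear.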
The new chips are intended to power hardware made by other manufacturers as well as some of Samsung’s own devices. The company said it plans to put Artik chips into its refrigerators, TVs, and ovens as part of the plan it outlined at the 2015 Consumer Electronics Show to connect all of its products to the internet by 2020.

Yoon Lee, vice president of smart home and digital appliances at Samsung, said that his division still hasn’t decided exactly how many of Samsung’s devices will use Artik. And even though Samsung’s chip business is part of the same company as its appliance business, the division will evaluate other vendors’ chips. “We will adopt Artik if it’s good for consumers and good for business,” Lee said.

With these new chips, Samsung’s competition with rivals like Intel and Qualcomm is certain to increase. Both companies are also pushing into the Internet of Things. Intel’s Edison is a development system designed for connected gadgets like wearables. Qualcomm is expected to detail its efforts in this area with an announcement this Thursday at an event in San Francisco.

Some analysts said Samsung will face challenges in persuading third-party manufacturers, many of which compete with the company, to adopt its chips. (Samsung makes a million products a day and ships 660 million devices a year across a wide swath of the consumer electronics market.)

“When you’ve got other parts of your company using the silicon, it’s hard to get anybody else to buy it,” said Jim McGregor of Tirias Research. “Why would a company use a Samsung chip if they’re competing with Samsung?”

In an effort to overcome those obstacles, Samsung is trumpeting the openness of its technology. On Tuesday the company also launched SmartThings Open Cloud, built by SmartThings, the cloud-based smart home startup that Samsung acquired for a reported $200 million last August. The Artik chips will connect to Samsung’s new cloud service, which will allow outside cloud applications to communicate with Artik-powered devices.

“This is a commitment to openness,” said SmartThings CEO and cofounder Alex Hawkinson in a discussion following Samsung’s presentation at the conference. “There’s lots of efforts out there that claim they’re open, but really they’re walled gardens. We’re trying to combat that.”

Follow me on Twitter @aatilley or send me an email: atilley@forbes.com

Qualcomm: The Internet Of Things Is Already A Billion Dollar Business

The emerging world of every object connected to the internet — the “Internet of Things” — is getting plenty of attention, but it remains frustratingly opaque. For all the endless research reports and tech-pundit hype, it’s hard to see much substance.

Qualcomm provided a bit of guidance on Thursday about exactly how big this sector is for the San Diego chipmaker. At an event at San Francisco’s Masonic Center, Qualcomm said it made $1 billion in revenue last year on chips used in a variety of city infrastructure projects, home appliances, cars, and wearables.

Qualcomm said there were 120 million smart home devices shipped with Qualcomm chips in them last year. In addition, 20 million cars are equipped with its chips, and Qualcomm silicon is used in 20 types of wearable devices. The company supplies everything from cellular modems and WiFi chips to application processors for Internet of Things devices.

The chipmaker makes most of its money selling wireless chips and processors for a large chunk of the smartphone market, but it expects 10% of its chip division’s revenue to come from non-smartphone devices in 2015.

Qualcomm is using much of the same technology it puts into phones for Internet of Things devices. It is tailoring its Snapdragon mobile-phone processors, for example, for automobiles and smartwatches. “The investment we’ve made in the Snapdragon business is necessary to drive our mobile business,” Qualcomm president Derek Aberle said in an interview. “We’ve also made some investments in automotive-grade Snapdragon chips, but it’s not like we have to create entirely new processors and chips. We can leverage our previous investments and come through with higher margins.”

Aberle pointed out that when the company was founded back in 1985, its first product was connectivity technology for trucks. Revenue from that business funded its research into cellular modems — the core technology Qualcomm is best known for today.

Qualcomm is facing competition from the rest of the chipmaking world in its attempt to be a big player in this new market. On Tuesday, Samsung unveiled a new set of chips called Artik, which are also specifically intended for Internet of Things devices. Intel has a business division dedicated to the Internet of Things that brought in $2 billion in revenue for 2014, but Intel folds lots of software and cloud services into that figure. Qualcomm’s $1 billion for 2014 sticks strictly to chip sales.

Unlike the smartphone business, the Internet of Things market is not likely to be dominated by a handful of massive players like an Apple or Samsung. Instead, it’s expected to consist of smaller players competing across a variety of segments, be it connected shoes, watches, thermostats, lightbulbs, or other devices.

On Thursday, Qualcomm also introduced new chips intended for Internet of Things devices: a WiFi chip with processing power built in, which can connect to the internet without the assistance of a separate processor; and another WiFi chip to power a hub that acts as a central point for routing devices together. The latter would be ideal for a smart home hub connecting multiple devices to one another.

Breaking ranks with its competitors and most everyone in technology, Qualcomm doesn’t like to talk about the Internet of Things. It prefers to call it the “Internet of Everything.”

“The term captures the concept well,” Raj Talluri, senior vice president of product management, said in an interview. “The problem is that it captures it in a way that makes people think it’s one club of stuff. There’s a smartphone processor, but we can’t talk about an Internet of Everything processor. It needs to be further refined.”

