Trends in Storage – Phase Change Memory (PCM)

What is Phase Change Memory?

Phase change memory (PCM) is an emerging non-volatile solid-state memory technology employing phase change materials.  It has been considered as a possible replacement for both flash memory and DRAM, but the technology still needs to mature before it can be put into production use.

We may not realize it, but we are already using phase change materials to store data – they are used in re-writeable optical storage, such as CD-RW and DVD-RW discs.  For optical drives, bursts of energy from a laser put tiny regions of the material into amorphous or crystalline states to store data. The amorphous state reflects light less effectively than the crystalline state, allowing the data to be read back again.

Phase change materials, such as salt hydrates, are also capable of storing and releasing large amounts of energy when they move from a solid to a liquid state and back again.  Traditionally, they have been used in cooling systems and, more recently, in solar-thermal power stations, where they store heat during the day that can be released to generate power at night.

However, additional properties of these materials are being researched that may allow for new and exciting uses.

For memory devices it is not their thermal or optical properties that make phase change materials so attractive, but their ability to switch from a disorderly (amorphous) state to an orderly (crystalline) one very quickly.  PCM chips rely on glass-like materials called chalcogenides, typically a mixture of germanium, antimony and tellurium.  PCM exploits the pronounced change in electrical resistivity when the material switches between its two stable states, the amorphous and poly-crystalline phases.

Promise of PCM

With a combination of speed, endurance, non-volatility and density, PCM could enable a paradigm shift for enterprise IT and storage systems as soon as 2016.  Such a memory technology would allow computers and servers to boot instantaneously and would significantly enhance the overall performance of IT systems.  PCM can write and retrieve data orders of magnitude faster than flash, offers higher storage capacities, and does not lose data when the power is turned off.

Phase change materials are also being considered for the practical realization of ‘brain-like’ computers, where a PCM cell acts as a hardware neuron and provides synapse-like functionality via the ‘memflector’, an optical analogue of the memristor.

How does Phase Change Memory work?

PCM chips consist of a layer of chalcogenide sandwiched between two electrodes. One of the electrodes acts as a resistor, heating up when current passes through it.  A longer, moderate pulse of electrical energy heats the chalcogenide above its crystallization temperature; as the material cools, it forms a crystalline structure. This state corresponds to the cell storing a “1”. When a short, stronger pulse is applied, the chalcogenide melts and does not have time to form crystals as it cools; it assumes a disorderly amorphous state corresponding to a “0”.  The amorphous state has a much higher electrical resistance than the crystalline state, which is how the stored value is read back, and because a cell’s resistance records its history PCM cells are sometimes grouped with “memristors”.  The process is reversible and controlled by the applied current, so a PCM cell can switch between “0” and “1” over and over again.
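
As a rough illustration of this read/write mechanism, the Python sketch below models a single PCM cell. The resistance values, threshold and pulse names are invented for the example and are not real device parameters; the point is simply that the pulse type sets the state and a resistance comparison reads it back.

    # Toy model of a single PCM cell (illustrative values, not device physics).
    # A longer, moderate "SET" pulse lets the material crystallize (low resistance, "1");
    # a short, intense "RESET" pulse melts it and quenches it amorphous (high resistance, "0").

    R_CRYSTALLINE = 10e3    # ohms, assumed low-resistance (crystalline) state
    R_AMORPHOUS = 1e6       # ohms, assumed high-resistance (amorphous) state
    R_THRESHOLD = 100e3     # read threshold separating the two states

    class PCMCell:
        def __init__(self):
            self.resistance = R_AMORPHOUS    # start in the amorphous ("0") state

        def write(self, bit):
            if bit == 1:
                self.resistance = R_CRYSTALLINE   # SET pulse: crystallize
            else:
                self.resistance = R_AMORPHOUS     # RESET pulse: melt and quench amorphous

        def read(self):
            # A small sense current measures resistance; low resistance reads as "1".
            return 1 if self.resistance < R_THRESHOLD else 0

    cell = PCMCell()
    for bit in (1, 0, 1, 1):
        cell.write(bit)
        assert cell.read() == bit    # the cell switches back and forth repeatedly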

If the amount of current delivered to the cell is controlled precisely, the chalcogenide can be left in an intermediate state that combines amorphous and crystalline regions.  This is the principle of multilevel PCM, which can store multiple bits of information in a single cell.

IBM researchers have built PCM memory chips with 16 states (or four bits) per cell, and David Wright, a data-storage researcher at the University of Exeter, in England, has built individual PCM memory cells with 512 states (or nine bits) per cell. But the larger the number of states, the more difficult it becomes to differentiate between them, and the higher the sensitivity of the equipment required to detect them, he says.
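
To make the multilevel arithmetic concrete, the short sketch below shows why 16 resistance levels correspond to four bits and 512 levels to nine, and how a quantizer over an assumed resistance range could map a readout back to a stored symbol (the range and level count here are purely illustrative).

    import math

    def bits_per_cell(levels):
        # Each distinguishable resistance level encodes log2(levels) bits.
        return math.log2(levels)

    print(bits_per_cell(16))    # 4.0 bits, as in the 16-state cells
    print(bits_per_cell(512))   # 9.0 bits, as in the 512-state cells

    def decode_level(resistance, r_min=1e4, r_max=1e6, levels=16):
        # Illustrative quantizer: split an assumed resistance range into equal bins.
        # Real multilevel cells need drift-tolerant detection, which is exactly
        # why large numbers of levels become hard to tell apart.
        step = (r_max - r_min) / levels
        return min(levels - 1, max(0, int((resistance - r_min) / step)))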

When was PCM discovered?

Although the concept of phase change memory came along some 40 years ago, it was only in 2011 that scientists at IBM Research demonstrated that PCM can reliably store multiple data bits per cell over extended periods of time.

What is the performance of Phase Change Memory?

PCM exhibits highly desirable characteristics, such as rapid state transitions, good data retention and performance, and the potential to scale to ultra-small device dimensions.  Writing to individual flash-memory cells requires erasing an entire region of neighbouring cells first; this is not necessary with PCM, which makes it much faster.  Indeed, some prototype PCM devices can store and retrieve data 100 times faster than flash memory.
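
The sketch below contrasts the two write paths under simplified assumptions (the page and block sizes are illustrative): updating a single page of flash forces a read, block erase and full rewrite, whereas a byte-alterable memory such as PCM overwrites only the target location.

    PAGE = 4096
    PAGES_PER_BLOCK = 64

    def flash_update(block, page_index, new_page):
        # Flash cannot overwrite in place: copy the block, erase it, program everything back.
        snapshot = list(block)
        snapshot[page_index] = new_page
        for i in range(len(block)):          # block erase (simulated)
            block[i] = b"\xff" * PAGE
        for i, page in enumerate(snapshot):  # re-program every page
            block[i] = page
        return PAGES_PER_BLOCK               # pages physically written for one logical update

    def pcm_update(memory, page_index, new_page):
        memory[page_index] = new_page        # write in place, only what changed
        return 1

    block = [b"\x00" * PAGE for _ in range(PAGES_PER_BLOCK)]
    print(flash_update(block, 3, b"\x01" * PAGE))   # 64 pages written
    print(pcm_update(block, 3, b"\x01" * PAGE))     # 1 page written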

Another benefit of PCM is that it is extremely durable, capable of being written and rewritten at least 10 million times.  Flash memory, by contrast, wears out after a few thousand rewrite cycles because of the high voltages required to move electrons in and out of the floating gate.  Accordingly, flash needs special controllers to keep track of which parts of the chip have become unreliable so they can be avoided.  This increases the cost and complexity of flash, and slows it down.

PCM is also inherently fast because the phase-change materials can flip between phases very quickly, in the order of a few nanoseconds.  Recent simulations of these materials suggest that the phase-change mechanism can operate on the sub-nanosecond time scale as well.

In addition, PCM offers greater potential for future miniaturisation than flash.  As flash-memory cells get smaller and devices become denser, the number of electrons held in the floating gate decreases.  Because the number of electrons is finite, there will soon come a point at which this design cannot be shrunk any further.  PCM takes a radically different approach: switching between the crystalline and amorphous states does not rely on storing electrons, so the material does not deteriorate as quickly over time as flash.

The IBM research team believes that multi-level phase change memory technology could be ready for use by 2016.

How will PCM be used?

Replacing flash is not going to be easy, though.  Flash has a huge customer base and is today the most mature of the solid-state storage technologies.  However, flash and PCM may play in different spaces.  PCM could serve as main memory for enterprise-class applications thanks to its very high endurance and better latency, and it could also complement DRAM in future products: instead of a small amount of DRAM alone, there could be a bigger pool of PCM and DRAM, with the DRAM serving as a cache for the PCM.

At the same time, some of the biggest memory manufacturers are already considering PCM as a replacement for NOR flash, which is used in cell phones to store executable code.  Because NOR flash is reaching the end of its scaling pathway, this is one area where PCM is expected to enter the market first.

The technology could benefit applications such as “big data” analytics and cloud computing.

Operating systems, file systems, databases and other software components need significant enhancements to enable PCM to live up to its potential.  Studies show that any piece of software that spends a lot of time trying to optimize disk performance is going to need significant reengineering in order to take full advantage of these new memory technologies.

Who is leading the work on Phase Change Memory?

Companies like Micron Technology, Samsung and SK Hynix, three giants of digital storage, are already applying PCM inside memory chips.  The technology has worked well in the laboratory for some time and is now moving towards the mainstream consumer market.  Micron started selling its first PCM-based memory chips for mobile phones in July 2012, offering 512-megabit and one-gigabit capacities.

IBM is now working with SK Hynix to bring multi-level PCM-based memory chips to market.  The aim is to create a form of memory capable of bridging the gap between flash, which is used for storage, and dynamic random-access memory, which computers use as short-term working memory but which loses its contents when switched off.  PCM, which IBM hopes will be on sale by 2016, would be able to serve simultaneously as storage and working memory, a new category it calls “storage-class memory”.

Conclusion

PCM promises to be smaller and faster than flash, and will probably be storing your photos, music and messages within a few years.

In short, PCM does not merely threaten to dethrone flash; it could also lead to a radical shift in computer design, a phase change on a much larger scale.

References

  • The paper “Drift-tolerant Multilevel Phase-Change Memory” by N. Papandreou, H. Pozidis, T. Mittelholzer, G.F. Close, M. Breitwisch, C. Lam and E. Eleftheriou, was recently presented by Haris Pozidis at the 3rd IEEE International Memory Workshop in Monterey, CA.
  • The Economist: “Phase-change memory, Altered states”, Q3 2012
  • IBM Research, Zurich. “IBM scientists demonstrate computer memory breakthrough”
  • Search Solid State Storage. “UCSD lab studies future changes to non-volatile memory technologies”
  • Search Solid State Storage. “New memory technologies generate attention as successor to NAND flash”
  • Arithmetic and Biologically-Inspired Computing Using Phase-Change Materials by C. David Wright, Yanwei Liu, Krisztian I. Kohary, Mustafa M. Aziz, Robert J. Hicken

Spotlight – XtremIO

Introduction

On May 10, 2012, EMC announced that it had acquired privately held XtremIO.  This article looks at XtremIO, its technology, the reasons behind the acquisition, and what it means for the other big players.

About the Company

XtremIO is based in Herzliya, Israel (“The Start-Up Nation”). It was founded in 2009 and has raised $25 million in venture capital funding.  It provides an all-flash array built from the ground up around data reduction techniques, such as inline deduplication, to lower costs and save capacity.

It competes against other all-flash array makers such as SolidFire, Texas Memory Systems (TMS), Violin Memory, Nimbus Data, Pure Storage and Whiptail.

Technology

XtremIO describes its all-flash array as having a scale-out clustered design in which additional capacity and performance can be added as needed.  It has no single point of failure and supports real-time inline data deduplication.  Being all-flash, the XtremIO system delivers high levels of I/O performance, particularly for the random I/O workloads typical of virtualized environments, with consistently low (sub-millisecond) latency.  It also integrates with VMware through VAAI.

XtremIO won a 2012 Green Enterprise IT award from the Uptime Institute for IT Product Deployment.

Acquisition of XtremIO by EMC

Israel-based companies, for the most part, are not great at selling; what they are great at is engineering.  Companies like EMC and NetApp have big sales channels and can pick up small Israeli start-ups relatively cheaply for their technology alone.  The XtremIO acquisition was reported to be valued at $430 million.

EMC and XtremIO also have natural ties in part because XtremIO co-founder Shuki Bruck sold his previous company Rainfinity to EMC.

Big competitors, including NetApp, HP, Dell, IBM and Hitachi Data Systems, may feel pressured to get in the game and look for such companies to acquire, reports Derrick Harris.  Indeed, NetApp was reportedly also bidding for XtremIO.

EMC Advantage

All-flash arrays are expensive, high-performance systems built for applications requiring high throughput, such as relational databases, big data analytics, large virtual desktop infrastructure or processes requiring large batch workloads like backups.

Flash arrays can deliver high performance using a relatively small amount of rack space, power and cooling.

The type of all-flash array XtremIO offers will give EMC faster performance across both virtualized and big data environments, which will also help EMC’s subsidiary VMware, which focuses on virtualization. Combined with EMC’s server-side PCIe flash product, Project Lightning, which keeps hot data in an SSD cache sitting alongside the processor, that makes a powerful hardware platform for tomorrow’s applications.

EMC needed new technology, and rather than develop it in house, it chose to buy that technology, and a strong flash storage development team. The other large storage vendors will probably make similar purchases to catch up.

Rather than combining Isilon and VNX somehow, EMC acquired XtremIO, which offers scale-out, great data management and great performance.  In fact, its subsystem was built specifically for flash, whereas flash was an afterthought for NetApp, which still leverages an HDD-optimized subsystem.

Industry Impact

It is clear that flash is going to become even more important for the big storage players, and getting in first with XtremIO might pay off for EMC and turn out to be the deal of the year.

With pressure mounting on the other big players to catch up with EMC, similar companies may be the next acquisition targets; Fusion-io, Violin Memory, Virident and Kaminario are possible candidates that other players might be looking at.

EMC Project X

At VMworld 2012, EMC showed an early version of the all-flash array based on XtremIO technology.  Project X, as the array is known for now, was revealed to have dual Intel-based controllers in each X-Brick scaling unit along with a shelf of flash drives, two host adapters with two ports each (supporting FC and iSCSI), and InfiniBand connecting the modules together in a scale-out manner.

The demo claimed dedupe rates of 2600%.  The dedupe is global, inline and always on, and is said to extend SSD lifespans by reducing the rate of writes to each drive.  The array delivers a predictable sub-millisecond I/O response time for every 4K block no matter what the workload: read, write, sequential, random, snapshots and so on.  A million IOPS, once a headline figure, can be reached with a fairly modest configuration of XtremIO modules.
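
As a minimal sketch of what global, inline deduplication means (plain Python; the 4 KB blocks and SHA-256 fingerprints are assumptions for illustration, not a description of XtremIO's internals), the fragment below fingerprints each incoming block before it is written and stores only content it has not seen, which is also why write traffic to the SSDs, and hence wear, goes down.

    import hashlib

    class DedupStore:
        # Toy content-addressed block store: each unique block is written to "flash" once.
        def __init__(self):
            self.blocks = {}          # fingerprint -> block data (stands in for the SSDs)
            self.logical_writes = 0
            self.physical_writes = 0

        def write_block(self, data):
            self.logical_writes += 1
            fp = hashlib.sha256(data).hexdigest()   # fingerprint computed inline, before writing
            if fp not in self.blocks:               # only previously unseen content hits the SSDs
                self.blocks[fp] = data
                self.physical_writes += 1
            return fp                               # metadata maps the logical address to fp

    store = DedupStore()
    for block in [b"A" * 4096] * 25 + [b"B" * 4096]:
        store.write_block(block)
    print(store.logical_writes, store.physical_writes)   # 26 logical writes, only 2 physical

The ratio of logical to physical writes in such a scheme is what a quoted deduplication rate refers to.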

The price of the new machines was not disclosed or even discussed, but a likely release date somewhere in the first half of 2013 remains on EMC’s agenda.

References

  • The Register, “EMC shows off XtremIO’s Project X box”
  • VentureBeat.com, “EMC’s buy of XtremIO for $400M could spur M&A rush in flash storage”
  • VentureBeat.com, “Flash storage mania — EMC buys XtremIO, eyes turn toward Violin”
  • Gigaom, “If EMC buys XtremIO, the flash war is on”
  • EMC, “VMware view solution guide”
  • Computer Weekly, “XtremIO: Costly mistake or genius deal for EMC?”
  • Chuck’s Blog, “When Flash Changed Storage: XtremIO Preview”

Spotlight – Akamai: Pioneer in CDN

Akamai was recently in the news for its acquisitions of Blaze Software Inc. and Cotendo Inc., and it has done it again with yesterday’s acquisition of FastSoft Inc., a provider of content acceleration software.  The acquisition is expected to enhance Akamai’s cloud infrastructure solutions with technology for optimizing the throughput of video and other digital content across IP networks.

This article talks about Content Delivery Networks in general, Akamai, and its recent acquisitions.

Overview of Content Delivery Network (CDN)

Today the internet has about 77 TBps of global capacity.  As the internet has grown, the number of Internet Exchange Points (IXPs) across the world has increased from 50 in 2000 to over 350 in 2012. Today, when a person requests a video stream or a download, the data is served through a content delivery network and does not need to travel as far as it would if it were sent directly from the origin server to the user.  As a result, the user gets better quality of service, and server load is reduced because the data is cached across the content delivery network. Over 45 per cent of web traffic today is delivered over CDNs.

Conceptually, a delivery network is a virtual network built as a software layer over the actual internet, deployed on widely distributed hardware, and tailored to meet the specific system requirements of distributed applications and services.  A delivery network provides enhanced reliability, performance, scalability and security that are not achievable by directly utilizing the underlying Internet.  A CDN, in the traditional sense of delivering static Web content, is one type of delivery network; today CDNs deliver dynamic content as well.
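
A highly simplified sketch of that idea follows (Python; the regions, latencies and cache behaviour are invented for illustration): a request is routed to the nearest edge node and served from its cache, and only a cache miss travels all the way back to the origin server.

    # Toy CDN: route to the closest edge node; serve from its cache, fetching from the origin on a miss.
    ORIGIN = {"/video.mp4": b"...video bytes..."}           # origin server content (illustrative)
    EDGES = {"mumbai": {}, "frankfurt": {}, "dallas": {}}   # per-edge caches, initially empty

    def rtt_ms(user_region, edge):
        # Assumed round-trip times; a real delivery network measures these continuously.
        table = {("india", "mumbai"): 20, ("india", "frankfurt"): 120, ("india", "dallas"): 240}
        return table.get((user_region, edge), 300)

    def fetch(user_region, path):
        edge = min(EDGES, key=lambda e: rtt_ms(user_region, e))   # pick the nearest edge
        cache = EDGES[edge]
        if path not in cache:          # cache miss: one trip back to the origin
            cache[path] = ORIGIN[path]
        return edge, cache[path]       # later requests from the region are served at the edge

    print(fetch("india", "/video.mp4")[0])   # 'mumbai', fetched from the origin once
    print(fetch("india", "/video.mp4")[0])   # 'mumbai' again, now served from the edge cache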

Overview of Akamai

Akamai, launched in early 1999, is the pioneer in Content Delivery Networks.  The company evolved out of an MIT research effort to solve the flash crowd problem.  It provided CDN solutions to help businesses overcome content delivery hurdles.  Since then, both the Web and the Akamai platform have evolved tremendously.  In the early years, Akamai delivered only Web objects (images and documents).  It has since evolved to distribute dynamically generated pages and even applications to the network’s edge, providing customers with on-demand bandwidth and computing capacity.

Today, Akamai delivers 15-20% of all Web traffic worldwide and provides a broad range of commercial services beyond content delivery, including Web and IP application acceleration, EdgeComputing™, delivery of live and on-demand high-definition (HD) media, high-availability storage, analytics, and authoritative DNS services.  Comprising more than 61,000 servers located across nearly 1,000 networks in 70 countries worldwide, the Akamai platform delivers hundreds of billions of Internet interactions daily, helping thousands of enterprises boost the performance and reliability of their Internet applications.

Akamai Acquisitions

The following list shows some of the Akamai acquisitions over its history.

  • June 2005, Akamai acquired Speedera Networks, valued at $130 million.
  • November 2006, Akamai acquired Nine Systems Corporation, valued at $164 million.
  • March 2007, Akamai acquired Netli, valued at $154 million.
  • April 2007, Akamai acquired Red Swoosh, valued at $15 million.
  • November 2008, Akamai acquired aCerno, valued at $90.8 million.
  • June 2010, Akamai acquired Velocitude LLC, valued at $12 million.
  • February 2012, Akamai acquired Blaze Software Inc., a provider of front-end optimization (FEO) technology.
  • March 2012, Akamai acquired Cotendo, valued at $268 million, which offers an integrated suite of web and mobile acceleration services.
  • September 13, 2012, Akamai acquired FastSoft Inc., a provider of content acceleration software.

Akamai’s latest acquisition, FastSoft Inc., was launched in 2006 to commercialize network optimization technology.  FastSoft’s patented FastTCP algorithms improve on the Transmission Control Protocol (TCP), adding intelligence designed to increase the speed of dynamic page views and file downloads while reducing the effects of network latency and packet loss.  FastSoft’s technology has helped improve website and web application performance across the first and last miles, as well as through the cloud, without requiring client software or browser plug-ins.  Combining FastSoft with Akamai’s existing network protocols is expected to help Akamai optimize server capacity, deliver higher throughput for video, and bring greater efficiency to its global platform.

The 2012 acquisitions of Blaze, Cotendo and now FastSoft indicate a trend towards providing end-to-end acceleration for an entire leg of the transaction.  With the proliferation of mobile devices and of users accessing the internet on them, Akamai is also targeting performance improvements and network services that deliver content to these users with lower latency and better security than was previously available.

Compuware’s Gomez platform is a well-known technology for measuring the performance of Web applications.  According to Gomez benchmarks, a mobile Web site takes 7.7 to 8 seconds to open, versus 2 seconds on a desktop computer, says Pedro Santos, VP of the Mobile Business at Akamai.

“So there is a tremendous opportunity to improve the performance of mobile web sites and applications,” he says, citing user surveys that 71% of consumers expect Web sites to open on a mobile phone as quickly as they do on a desktop computer, and that 77% of organizations today have mobile web pages that take longer than 5 seconds on average to open.

Akamai’s new products like Terra Alta and Aqua Mobile Accelerator also substantiate this trend.

It will be interesting to watch the strategic response from Akamai’s competitors, such as Limelight Networks.

References

  • Network Computing, “Akamai Boosts Web, Mobile App Performance”
  • http://www.prnewswire.com, “Akamai Acquires FastSoft”
  • Gigaom, “The shape of Internet has changed. It now lives life on the edge”

Low Energy Bluetooth – Possibilities for connected point of care diagnostic devices

Connectivity between devices and systems for data sharing is becoming a key aspect of personal and clinical healthcare solutions. While wired systems have been used, they clearly have user-acceptance issues: imagine a patient carrying a bundle of wires for a personal health monitoring system, or a patient lying on the operating table surrounded by wires. Despite regulatory constraints, wireless technology is quickly proliferating as the preferred means of communication for wide-area needs (e.g., remote monitoring of patients) as well as short-range ones (e.g., patient monitors).

Companies can opt for a proprietary RF wireless solution or a standard wireless technology such as Wi-Fi, Bluetooth or ZigBee. A proprietary solution gives better control and can cater to specific needs, whereas a standard technology reduces development and testing effort and makes regulatory expectations easier to manage.

The low energy Bluetooth standard is designed as a low cost solution with a focus on low power consumption, and it is targeted at data collection from sensor-based device networks. It works on the concept of transmitting small amounts of data on an event (e.g., the periodic capture of vital signs sent to a central hub), which results in low power consumption compared to regular Bluetooth with its continuous data streaming.  The event-based system wakeup makes it ideal for sensor-based device networks. Its enhanced range is over 100 meters, with connection setup and data transfer latency as low as 3 ms, and it supports full AES-128 encryption using CCM. Low energy Bluetooth uses adaptive frequency hopping to minimize interference from other technologies, such as Wi-Fi, in the 2.4 GHz band. Usefully, in dual mode a device can run both the classic Bluetooth and the low energy Bluetooth protocols, depending on the master device, whereas in single mode it runs the low energy protocol only.
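
A back-of-the-envelope sketch of why event-based transmission is so frugal follows; the current figures and intervals below are assumptions made up for the example (they are not taken from the Bluetooth specification), but they show how a radio that sleeps between short bursts draws a tiny average current compared with one that streams continuously.

    # Illustrative average-current comparison; all numbers are assumptions for the sketch.
    SLEEP_CURRENT_MA = 0.001    # radio asleep between events
    ACTIVE_CURRENT_MA = 15.0    # radio awake, connecting and transmitting
    EVENT_DURATION_S = 0.003    # ~3 ms connection setup + data transfer per event
    EVENT_INTERVAL_S = 1.0      # one vital-signs sample sent per second

    def average_current(active_ma, sleep_ma, duty_cycle):
        return active_ma * duty_cycle + sleep_ma * (1 - duty_cycle)

    ble_avg = average_current(ACTIVE_CURRENT_MA, SLEEP_CURRENT_MA,
                              EVENT_DURATION_S / EVENT_INTERVAL_S)
    streaming_avg = ACTIVE_CURRENT_MA    # continuous streaming keeps the radio busy

    print(f"event-based: {ble_avg:.3f} mA average, streaming: {streaming_avg:.1f} mA average")
    # With these assumptions the event-based sensor draws a few hundred times less average
    # current, which is what makes coin-cell-powered sensor networks practical.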

The standard Bluetooth health device profile focuses on patient monitoring and personal healthcare devices in both home and clinical environments. Personal healthcare is one of the best-identified business cases for low energy Bluetooth, and with initiatives from the Continua Health Alliance there is already the required traction for integration between third-party devices. However, there is a great opportunity (via the serial port profile or other standard profiles) to use this technology in other areas of device connectivity as well.  Some examples could be:

  • Low energy Bluetooth enabled patient bedside vital-signs monitoring devices, which send vitals data to a central device. The central hub could further connect to a central server over LAN or WLAN.
  • Continuous diabetic monitoring devices, which send the data to a smartphone application. The data could be uploaded from the smartphone to a central server for data management or other uses.
  • Post-surgery or rehabilitation monitoring devices.
  • Smart wireless diagnostic catheters with a smart node that sends the data to the central data collection device over low energy Bluetooth.

References

http://www.bluetooth.com/Pages/low-energy.aspx

Challenging World of Software Product Line

Software Product Line (SPL) engineering is not a new concept and has existed for many years. It has been used across industries ranging from avionics to medical, automotive to consumer electronics, and telecom to storage. Big industry names such as Boeing, Philips, Toshiba, Nokia, Ericsson (AXE) and GM have applied the concept successfully. However, it has still not become mainstream in the industry.  Some of the challenges that inhibit this are:

  • Managing variability across products
  • Investment required in building the SPL based infrastructure
  • People resistance and the skills required

Variability across products is easier to manage if enough time is invested in identifying the products under the product line and in defining the core asset features from the domain perspective. The process should be driven from the level of the domain, product and system, not from the software level. From a broader perspective, SPL can be approached as a platform product line in which every component of the system, including hardware, is covered.

The fundamental cost line items are largely the same as those of standard software development. However, the economics of SPL suggest that as the number of products increases, SPL-based product development becomes more cost effective than the standard development approach. The initial investment in SPL is large, but over time the benefits outweigh it, and the initial investment can be managed by using a hybrid approach: developing a product and the SPL core assets in parallel, with more effort spent in the early life-cycle phases to manage the SPL expectations.
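
A toy cost model of that break-even argument is sketched below; the figures are made up purely to illustrate the shape of the curve, not taken from any study. The product line carries a large up-front core-asset investment but a much lower per-product cost, so it overtakes conventional development once enough products share the platform.

    # Hypothetical cost model; the constants are illustrative, not industry data.
    SPL_UPFRONT = 300      # core-asset / platform investment (arbitrary units)
    SPL_PER_PRODUCT = 40   # marginal cost of deriving each product from the line
    STD_PER_PRODUCT = 100  # cost of building each product from scratch

    def spl_cost(n):
        return SPL_UPFRONT + SPL_PER_PRODUCT * n

    def std_cost(n):
        return STD_PER_PRODUCT * n

    for n in range(1, 11):
        if spl_cost(n) < std_cost(n):
            print(f"product-line development breaks even at {n} products")   # 6, with these numbers
            break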

Resistance to change is a human tendency, and although the basic activities of SPL are the same as in a standard software development process, a different mindset is required. Sponsorship from senior leadership can help manage this resistance, but small steps towards the bigger SPL objective help even more. Such steps could be to start following the SDLC activities (in spirit) or, if they are already followed, to move towards model-driven development. Where the whole process is designed to also enhance individuals' skills, the change is smoother.

In summary, the SPL approach brings advantages such as reduced time to market, lower long-term cost, consistently better quality and market adaptability. However, a successful SPL takes time to deploy and needs a flexible process framework that evolves with experience, together with a thoughtful approach that varies across industries, products, domains and organizations.

References

  1. http://www.sei.cmu.edu/productlines/
  2. http://www.splc.net

Green buses in India – Are we asking for too much?

There was a recent news release from Ashok Leyland, the flagship company of the Hinduja Group, about plans to launch hybrid Optare buses in India.  While we know of some bold transit authorities in the U.S. and Europe running buses on hybrid power trains, the news still appeared to be more of a marketing gesture than a serious business announcement.  Given the credibility of the company, however, we were intrigued enough to look at some ground facts on the possibility of serious application of hybrid technology in our mass transportation systems. Considering that India offers a meagre 1.29 buses per thousand passengers while other well-planned countries provide vastly more (Brazil, for instance, has 10.3 buses per thousand), the opportunity to make an impact is tremendous.  Is hybrid the solution?

The hybrid transit bus got a big marketing boost at the end of last year when the iconic London Redbus was prototyped on a hybrid power train (with the underlying design from Volvo).  There was something of an anti-climax, though, when the bus had to pull over because the system was not designed for long-haul use.  In another development, China got its first indigenously developed solar-powered hybrid bus, which claims to prolong lithium battery life by 35 per cent.  In India too, a lot of hype was created with the launch of the Tata Starbus.  While the future has to be green, the economics of these buses and the peripheral systems have to allow a hybrid bus service to work from a cost perspective (there is little point in running just a handful of buses, as we may end up doing; it simply does not make economic sense).  Costs may become a huge barrier to adoption beyond a few pet ministerial projects.

Hybrid buses (most of them based on very costly technology from a few players) are at least 30% more expensive than the best buses running on Indian roads. Then there is the choice of drive train: series hybrid or parallel hybrid. Series hybrids are recognized as more suitable for start-stop applications and allow flexible packaging, compared with a parallel system, where the mechanical drive shaft has to be 1.5-2 m from the rear axle. Series hybrids are very sensitive to failure, however: if any of the electrical components fails, the vehicle comes to a complete standstill, and unfortunately many of the new components are susceptible to this. Then there are the batteries: they are expensive, hazardous and bulky, they add significant maintenance cost, and some need to be replaced as often as every four years.

In the context of adapting hybrid buses for Indian public transport, I believe a few things need to happen before they start to make sense: better batteries, lower initial cost, a better ecosystem and, most importantly, economies of scale. India can also choose to wait for the next wave of evolution in hybrids.  For example, a new technology for charging electric buses, called opportunity charging, has already been introduced in Europe.  This economical technology enables contactless charging of electric buses, so the driver need not leave the bus for recharging. We will follow the evolution closely and hopefully play a part.

Rehabilitation through Brain Machine Interfaces

Stephen Hawking: former Lucasian Professor of Mathematics at the University of Cambridge and world-renowned theoretical physicist, diagnosed with Amyotrophic Lateral Sclerosis (ALS) at the age of 21

Christopher Reeve: American film actor, fondly remembered for his motion-picture portrayal of the fictional superhero Superman; he suffered a spinal cord injury and became a quadriplegic at the age of 43

These are probably some of the first names that come to mind when we think of people living with disability and the need for rehabilitation technologies. A severe form of disability arises from limb amputation, an after-effect of traumatic accidents, degenerative diseases or social violence.

These conditions often mean drastic lifestyle changes for the disabled, worsened by limited means of livelihood, social disconnection and dependence on others. Restoring mobility for such patients is the goal of many research teams around the globe, most focusing on repairing the damaged nerves and finding ways for nerve signals to bypass the injury site. Another approach is to build prosthetic devices, essentially closed-loop architectures driven by biofeedback signals (EMG, EEG, etc.).

The Brain Machine Interface (BMI) is a recent technological advancement that taps brain waves using surface EEG electrodes, which are then used to control a prosthetic device. The guiding premise in the design of such interfaces is the following:

Actually moving a real arm and merely thinking about moving it produce very similar patterns of neuronal firing in the brain.

A smart wheelchair based on this premise could consist of an EEG cap that non-invasively acquires signals from the patient’s primary motor cortex, a real-time pattern recognition algorithm that identifies the patient’s mental state (thinking ‘left’, thinking ‘right’, or idle), and a controller that steers the wheelchair from these inputs. The EEG cap, as opposed to implanted neurochips, has limited bandwidth, but it is free from the complications of surgical procedures.
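
As a rough sketch of such a pipeline (Python with NumPy; the channels, frequency band, threshold and synthetic signals are all invented for illustration and are far simpler than a real BMI), the code below computes a mu-band power feature for one electrode over each motor cortex and maps the left/right asymmetry to a 'left', 'right' or 'idle' command.

    import numpy as np

    FS = 256    # assumed EEG sampling rate (Hz)

    def band_power(window, fs=FS, band=(8, 13)):
        # Power in the mu band (8-13 Hz) of one EEG channel, via a plain FFT.
        spectrum = np.abs(np.fft.rfft(window)) ** 2
        freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return spectrum[mask].mean()

    def classify(left_hemi, right_hemi, ratio=1.5):
        # Toy rule: imagining a movement suppresses mu power over the contralateral
        # motor cortex, so compare the two hemispheres and emit a command.
        lp, rp = band_power(left_hemi), band_power(right_hemi)
        if rp > lp * ratio:
            return "right"   # left hemisphere desynchronized -> imagined right movement
        if lp > rp * ratio:
            return "left"    # right hemisphere desynchronized -> imagined left movement
        return "idle"

    # Synthetic one-second windows: weak mu rhythm on the left hemisphere, strong on the right.
    t = np.arange(FS) / FS
    rng = np.random.default_rng(0)
    left_hemi = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.05 * rng.standard_normal(FS)
    right_hemi = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.05 * rng.standard_normal(FS)
    print(classify(left_hemi, right_hemi))   # 'right' -> steer the wheelchair right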

In an attempt to develop prosthetic technology capable of restoring motor control and tactile feedback to spinal cord injury patients, an interesting experimental study was reported in Nature: researchers from the Duke University Center for Neuroengineering successfully trained two monkeys to use the electrical activity in their motor cortex to control the arm of an on-screen avatar without physically moving, hinting at the possibility of wearable, thought-controlled prosthetic devices in the near future.

A recent case report from Cornell University represents the first successful demonstration of a BCI-controlled lower-extremity prosthesis for independent ambulation, which might offer a cheap, easy and non-invasive route to getting paraplegics walking again. A wireless brain-machine interface developed by Neural Signals uses implantable electrodes and jaw movements to address locked-in syndrome, a terrifying condition caused by a brain lesion which leaves patients aware but almost entirely without the power to move.

However fascinating these new developments seem, they are far from commercial reality. Non-invasive systems suffer from poor spatial and temporal resolution, driving the quest for more powerful and usable brain recording devices. The medical community would benefit for sure, but the possibilities are limitless: gaming, cursor control, brain timing and brain-to-brain communication, to name a few. And who knows, it might also open up an unethical dimension of hacking the brain!