Agiliad wins design contest at the 26th International Conference on VLSI Design and Embedded Systems!

Team Agiliad participated in the design contest at the 26th International Conference on VLSI Design and 12th International Conference on Embedded Systems 2013, recently held in Pune. We presented a novel solution for cost-effective and reliable estimation of fetal gestational age in resource-poor settings. The solution involved measuring the symphysis-fundus height of a pregnant woman using an image processing application built on the Raspberry Pi, a $25 open-source computing platform, augmented with a mechanical frame for referencing the region of interest. The concept was highly appreciated at the conference, and we emerged as winners of the design contest. Following is a brief overview of the problem we identified along with the proposed solution. An illustrated presentation of the concept can be downloaded here!

Problem Addressed: Reliable estimation of fetal gestational age in resource-poor settings

Estimation of the gestational age of the fetus is an important clinical practice, crucial for monitoring the health of both the mother and the fetus. The conventional method for this estimation is ultrasonography. However, due to the lack of high-end infrastructure in resource-poor settings, this method is not practical. Another method is the measurement of the symphysis-fundus height (SFH) using a measuring tape (shown in the figure below), which has been found suitable for rural settings. But because of the lack of proper training and documentation methods among health workers at the primary level, process variations and measurement errors are widely prevalent, leading to highly unreliable estimations. SFH measurement also facilitates early screening for macrosomia (excessive fetal weight), fetal growth retardation and multiple pregnancies. Hence, there is a definite need for a cost-effective technical solution to overcome the shortcomings of the manual measurement method and to help in multi-stage documentation of the procedure over the entire nine-month period.

Fundal Height Measurement

Solution Proposed: Measurement of symphysis-fundus height using Raspberry Pi image processing platform

The key drivers of the technical solution to this problem are low cost, ease of use and accuracy. The solution was derived from one of our in-house initiatives to leverage low-cost open-source hardware to build a generic computing platform for diverse applications, ranging from building automation to point-of-care medical diagnostic devices. The present solution comprises a mechanical frame attached to the patient bed. The frame carries three markers mounted on top of three telescopic pillars, which allow the markers to be positioned in the vertical plane. A web camera is also affixed to the frame at a certain distance from the patient bed. One of the markers is used to reference the camera to the patient by correlating the diameter of the circular marker in the image with its actual known diameter. The other two markers are positioned at suitable points on the fundus across which the length of the curve has to be calculated. A schematic diagram of the experimental setup is shown below. A customized image processing algorithm is implemented on a Raspberry Pi computing platform and consists of the following key steps: marker detection, edge image conversion, boundary tracing and distance calculation. The Raspberry Pi is $25 open-source hardware based on an ARM1176JZF-S running at 700 MHz, with a VideoCore 4 GPU (Blu-ray-quality playback) in a Broadcom BCM2835 SoC with 256 MB of RAM, two USB ports and an Ethernet port. The Raspberry Pi runs Linux kernel-based operating systems. The design and the algorithm were tested on a dummy model of the fundus, and accurate length measurements were obtained. The next steps in this project are testing the solution in a real clinical setting, building a mobile application on a similar concept, and estimating the amniotic fluid index in a pregnant woman using the depth sensor of the Microsoft Kinect.
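For the technically curious, the pipeline described above can be sketched in a few lines of Python with OpenCV: detect the circular markers, derive a millimetre-per-pixel scale from the known diameter of the reference marker, extract an edge image, trace the abdominal boundary and sum the pixel distances along it. This is only a hedged illustration of the concept; all function names, parameter values and the assumed marker diameter below are our own placeholders, not the contest implementation.

```python
# Illustrative sketch of the SFH measurement pipeline -- not the contest code.
import cv2
import numpy as np

REFERENCE_MARKER_DIAMETER_MM = 40.0   # assumed known diameter of the reference marker

def detect_markers(gray):
    """Marker detection: find circular markers with a Hough transform."""
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30, minRadius=5, maxRadius=80)
    return [] if circles is None else np.round(circles[0]).astype(int)  # (x, y, r) rows

def fundal_curve_length_mm(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    markers = detect_markers(gray)
    if len(markers) < 3:
        raise ValueError("expected three markers in the frame")

    # Camera referencing: known marker diameter vs. its diameter in pixels.
    ref = max(markers, key=lambda c: c[2])            # assume the largest circle is the reference
    mm_per_pixel = REFERENCE_MARKER_DIAMETER_MM / (2.0 * ref[2])

    # Edge image conversion and boundary tracing along the abdominal profile.
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea).reshape(-1, 2)

    # Distance calculation: sum pixel-to-pixel distances along the traced boundary.
    # (A full implementation would clip the boundary between the two fundus markers.)
    steps = np.diff(boundary.astype(float), axis=0)
    return float(np.sum(np.hypot(steps[:, 0], steps[:, 1])) * mm_per_pixel)
```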

Fundal Height

We wish to scout for more such fundamental problems and come up with effective and innovative solutions to tackle them!


Focus: Hadoop (Part 1)

A Google Trends graph for “Hadoop” and related technologies shows an interesting picture. Interest over time in web searches for Hadoop has steadily increased and continues to grow. It seems as if “Hadoop” and “Big Data” are replacing “Data mining” as keywords. Hadoop has enabled the big-data analytics that is a buzzword everywhere these days. What was “Big” a few years back seems very “small” now; “Big” keeps becoming “Bigger”, and Hadoop helps us bridge the gap.

 

 


 

This brief article (Part 1 of the series) gives an overview of Hadoop: its history, the technology and future trends.

Hadoop is not new. The underlying technology has been used by Google for web indexing and is used by organizations worldwide for big-data analytics. It has even been used in the Mars rover mission to aid in determining whether life ever existed on Mars. It is the sheer volume of data that needs to be handled where Hadoop shines, through its cluster-based distributed architecture.

In finance, if you want to do accurate portfolio evaluation and risk analysis, you can build sophisticated models that are difficult to put into a database engine. But Hadoop can handle it. In online retail, if you want to deliver better search answers to your customers so they’re more likely to buy the thing you show them, that sort of problem is also well addressed by Hadoop.

Hadoop is an open source project from Apache that has rapidly evolved into a major technology movement. It has emerged as the best way to handle massive amounts of data, including not only structured data but also complex, unstructured data.

Hadoop was created by Doug Cutting, the creator of Apache Lucene, the widely used text search library. Hadoop has its origins in Apache Nutch, an open source web search engine, itself a part of the Lucene project. The name Hadoop is not an acronym; it’s a made-up name. Cutting explains how the name came about:


“The name my kid gave a stuffed yellow elephant. Short, relatively easy to spell and pronounce, meaningless, and not used elsewhere: those are my naming criteria. Kids are good at generating such. Googol is a kid’s term.”

The underlying technology was invented by Google back in their earlier days so they could usefully index all the rich textual and structural information they were collecting, and then present meaningful and actionable results to users. There was nothing on the market that would let them do that, so they built their own platform. Google’s innovations were incorporated into Nutch, an open source project, and Hadoop was later spun off from that. Yahoo has played a key role in developing Hadoop for enterprise applications.

Simply put, Hadoop provides a reliable shared storage and analysis system. The storage is provided by the Hadoop Distributed File System (HDFS) and the analysis by MapReduce. These are the kernel components of Hadoop. However, Hadoop also has several other components, such as:

  • Hive (queries and data summarization)
  • Pig (processing large data sets)
  • HBase (column-oriented NoSQL data storage system)
  • ZooKeeper (coordinating processes)
  • Ambari (administration)
  • HCatalog (metadata management service)

HDFS is a filesystem designed for reliably storing very large files with streaming data access patterns, running on clusters of commodity hardware. As the name implies, HDFS is a distributed filesystem and hence has all the complications of network-based filesystems, such as consistency and node failures. However, by distributing storage and computation across many servers, the resource can grow with demand while remaining economical at every size.

MapReduce is a framework for processing “embarrassingly parallel” problems across huge datasets using a large number of computers. It uses data locality effectively to reduce the transmission of data between nodes. As the name implies, it consists of two steps: Map and Reduce. “Map” divides the problem into subproblems and distributes them across a cluster of nodes, while “Reduce” collects the answers from all the nodes in the cluster and merges the results. MapReduce is not specific to Hadoop and has been applied in different forms in other solutions. For example, at Google, the MapReduce algorithm was used to completely regenerate Google’s index of the World Wide Web.
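As a hedged, concrete illustration of these two steps, the canonical word-count example can be written as a small pair of Python functions in the style used with Hadoop Streaming: the mapper emits (word, 1) pairs and the reducer sums the counts for each word. The script below simulates both phases locally; none of it is specific to any particular Hadoop distribution.

```python
# wordcount.py -- a minimal Map and Reduce, simulated locally for illustration.
import sys
from itertools import groupby

def mapper(lines):
    """Map step: split each input line into words and emit (word, 1) pairs."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce step: group the pairs by word and sum the counts."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Locally we sort and group in-process; on a cluster, Hadoop Streaming runs the
    # mapper and reducer as separate processes and performs the shuffle/sort between them.
    for word, total in reducer(mapper(sys.stdin)):
        print(f"{word}\t{total}")
```

Run against a text file with `python wordcount.py < input.txt`, this produces a tab-separated word count; on a cluster the same mapper and reducer logic would be handed to Hadoop Streaming, with HDFS supplying the input splits.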

The premise of MapReduce is that the entire dataset—or at least a good portion of it—is processed for each query. But this is its power. MapReduce is a batch query processor, and the ability to run an ad hoc query against your whole dataset and get the results in a reasonable time is transformative. It changes the way you think about data and unlocks data that was previously archived on tape or disk. It gives people the opportunity to innovate with data. Questions that took too long to get answered before can now be answered, which in turn leads to new questions and new insights. This enables solutions like big data analysis.

Hadoop is designed to run on a large number of machines that don’t share any memory or disks. That means you can buy a whole bunch of commodity servers, slap them in a rack, and run the Hadoop software on each one. When you want to load all of your organization’s data into Hadoop, the software breaks that data into pieces that it then spreads across your different servers. There’s no one place where you go to talk to all of your data; Hadoop keeps track of where the data resides. And because multiple copies are stored, data on a server that goes offline or dies can be automatically replicated from a known good copy.

Despite all the advantages Hadoop provides, there are use cases it does not serve well – workloads that involve:

  • Low-latency access
  • Lots of small files
  • Multiple writers, arbitrary file modifications

Quantcast recently announced the open-sourcing of their Quantcast File System (QFS), which claims to provide better throughput than HDFS. It will be interesting to study how the two compare in performance tests. But Quantcast isn’t the only company that has replaced HDFS: MapR’s commercial distribution of Hadoop uses a proprietary file system, and DataStax Enterprise uses Apache Cassandra in place of HDFS.

In the next parts of this series, we shall talk about Hadoop’s components in more detail.

 

References:

Hadoop: What it is, how it works, and what it can do http://strata.oreilly.com/2011/01/what-is-hadoop.html

What is Apache Hadoop? http://hortonworks.com/what-is-apache-hadoop/

Trends in Big Connectivity: Big Data, Hadoop and Life on Mars http://blogs.datadirect.com/2012/08/trends-in-big-connectivity-big-data-hadoop-and-life-on-mars.html

Quantcast Open Sources Hadoop Distributed File System Alternative http://techcrunch.com/2012/09/27/quantcast-open-sources-hadoop-distributed-file-system-alternative/

Hardware mobile apps – making smart phones ‘medically’ smarter!

In an era when smart phones and mobile gadgets are becoming smarter day by day, it does not require much effort to assess their ‘smartness’ for innovative medical applications. Mobile apps for conventional medical alerts, reminders and health parameter monitoring (blood sugar, blood pressure, BMI etc.) have been in widespread use for a long time. Voxiva, a Washington, D.C.-based company, provides mobile health-coaching programs which target a wide variety of users, including pregnant women, diabetics, and smokers. SpiroSmart is a recent innovative iPhone app which enables the measurement and analysis of conventional lung function parameters. However, applications based on mobile devices have reached an altogether new dimension with the rapid development of innovative ‘mobile hardware apps’ for diverse medical uses. These pieces of hardware are used in conjunction with a conventional smart phone as potential medical diagnostic devices. Let us take a closer look at some of the most interesting (and technologically stimulating!) hardware mobile apps –

Netra

Netra is a solution proposed by the Camera Culture Group at MIT. It is an inexpensive mobile hardware app based on an inverse Shack-Hartmann sensor for the estimation of refractive errors in the human eye. The key idea is to interface a lenticular, view-dependent display with the human eye at close range, just a few millimeters apart.

Image Source: Camera Culture Group, MIT Media Labs

OScan

The OScan team at Stanford University has developed an affordable screening tool that brings standardized, multi-modal imaging of the oral cavity into the hands of rural health workers around the world, allowing individuals to conduct screenings for oral lesions. This inexpensive device mounts on a conventional camera phone and allows for data to be instantly transmitted to dentists and oral surgeons. OScan aims to empower minimally-skilled health workers to connect early stage patients to health care providers and teach communities about the importance of oral hygiene.

MobiUS

Mobisante, a Redmond-based company, has developed a mobile ultrasound system (MobiUS) which includes a Toshiba Windows Mobile-powered smart phone, an ultrasound probe, and the accompanying Mobisante software. The exam presets include “Quick Scan” (a general-purpose setting), AAA, FAST, cardiac, OB, pelvis, vascular and small organs.

e-Petri Dish

With the ePetri Dish system, scientists no longer have to remove cells from the incubator but can simply monitor them through images streamed to a laptop. Less manipulation makes for better cell health and reduces the risk of contamination. With the ePetri system, cells are grown on a CMOS image sensor – the kind found in common digital cameras – and a smartphone placed above the sensor provides, via a commercially available app, a scanning spot of light that sweeps back and forth across its LED screen.

Diabeto

Diabeto is a non-intrusive, Bluetooth-enabled device that connects to a glucometer and transmits data to a mobile phone. The Diabeto device can transmit to any diabetes mobile application. The Diabeto app will also have multiple utilities that can check your blood sugar levels, show history, suggest diets, notify the physician, etc.

Endoscope

The RVA Smart-clamp is a universal endoscope adapter which enables pictures and video to be taken with a mobile phone camera. The adapter is unique in that it is a purely mechanical device which helps the surgeon view endoscopic images in real time with great ease.

SmartHeart

SmartHeart is a gadget that turns a mobile phone into a powerful medical tool able to detect heart problems. It connects to a smartphone and converts it into a hospital-grade heart monitor capable of performing electrocardiograms in just 30 seconds. The device hooks around the user’s chest and records the heart rate by measuring its electrical activity.

Image source: SHL Telemedicine

CellScope

CellScope’s clip-on otoscope helps pediatricians raise the standard of care by creating a visual history of the middle ear, and saves parents time by allowing ear infections to be diagnosed and treated remotely. CellScope’s innovative clip-on dermascope also enables patients to capture and transmit high-magnification, diagnostic-quality images of the skin from the privacy and convenience of their own homes.

Optofluidics

Flow cytometry is a technique for counting and examining cells, bacteria and other microscopic particles. Researchers at the BioPhotonics Laboratory at the UCLA Henry Samueli School of Engineering and Applied Science have developed a compact, lightweight and cost-effective optofluidic platform that integrates imaging cytometry and fluorescent microscopy and can be attached to a cell phone. The resulting device can be used to rapidly image bodily fluids for cell counts or cell analysis.
Image source: http://pubs.acs.org/doi/abs/10.1021/ac201587a

Adoption of Multi Core processors for industrial applications – Opportunities and Challenges

While the semiconductor industry has found it harder to keep pace with Moore’s law since 2006, the push towards higher chip frequencies has brought new challenges in terms of power consumption. This has led to the evolution of multi-core processor (MCP) technology, which has already made a significant mark in the desktop computer market, with all major semiconductor companies producing processors with 2, 4 and even up to 16 processing cores.

Multi-core processor technology has opened up new avenues in other areas as well, and one domain that has started adopting the technology significantly is industrial automation and robotics. With a parallel evolution of operating systems and application software for industrial applications, various control devices like PLCs, microcontrollers and human interface devices can be combined to run on a single-board platform-based solution, something that was difficult to do with single-core architectures. With the varied software configurations that are possible, MCP architecture gives users a great deal of choice and flexibility; for example, one of the cores can be dedicated to a complex process or critical functionality like a safety module or a redundancy module while the other core remains available for non-critical operations.

Though in theory multiple cores enhance the overall computing performance of the platform, realizing the potential of multi-core processing poses a significant challenge to software designers. To realize the benefits of MCPs, programmers must strive for maximum parallelism while not compromising the real-time determinism of their applications.

Two software configurations are possible with MCPs: Symmetric Multi-Processing (SMP) and Asymmetric Multi-Processing (AMP). With a single operating system managing all the cores and scheduling tasks between them, SMP can give users full parallelism, provided the application is split into multiple threads. This brings to the fore the issue of redesigning existing applications to use thread affinity and multi-threading constructs. Programmers have to be trained to think in terms of parallelism, which they are not used to on single-core architectures. Also, while the SMP architecture provides enhanced performance if the parallelism is exploited adequately, it has the potential to adversely impact real-time determinism, which can be crucial in real-time systems.

In Asymmetric Multi-Processing platforms, multiple operating systems run simultaneously in the system, one per core, and the hardware peripherals are distributed among them. Since each OS manages only one core, there is hardly any need to redesign applications, allowing easy portability from single-core to multi-core platforms. AMP also ensures real-time determinism, since its design is equivalent to a single-core architecture with only one core on which to schedule tasks. However, AMP does have limitations in terms of the parallelism it can exploit on a multi-core setup, inherently because the operating system running on one core may not know whether other cores are idle and cannot schedule tasks for them.

With each configuration having its benefits and limitations, the choice of configuration depends entirely on the nature of the application. With MCPs having more than two cores, a hybrid configuration is also possible where SMP and AMP co-exist; for example, on a quad-core processor a single core can be configured for AMP to run critical tasks and ensure real-time determinism, while the other three cores are configured to run in SMP mode.
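To make the partitioning idea tangible, here is a hedged sketch in Python on Linux: a latency-critical loop is pinned to a dedicated core with os.sched_setaffinity (playing the role of the ‘AMP-like’ core), while a pool of worker processes shares the remaining cores in SMP fashion. A real industrial controller would of course use an RTOS or vendor-specific core partitioning rather than a general-purpose OS; the code only illustrates the affinity concept.

```python
# Linux-only illustration of hybrid core partitioning (not an industrial RTOS setup).
import os
import multiprocessing as mp

def critical_task():
    """Latency-sensitive loop pinned to core 0, standing in for the AMP-style core."""
    os.sched_setaffinity(0, {0})              # restrict this process to CPU 0
    while True:
        pass                                  # placeholder for a deterministic control loop

def background_work(n):
    """Non-critical computation that may run on any of the remaining cores."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    mp.Process(target=critical_task, daemon=True).start()

    # 'SMP-like' side: the parent and its worker pool are kept off CPU 0.
    # (Assumes a machine with more than one core.)
    workers = set(range(1, os.cpu_count()))
    os.sched_setaffinity(0, workers)
    with mp.Pool(len(workers)) as pool:
        print(pool.map(background_work, [100_000] * 8))
```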

The automation industry is slowly adopting MCPs, with higher-end controllers first, followed by lower-end controllers as costs come down. Alongside the software evolution, there is also a need for the associated tools, such as compilers and debuggers, to evolve so that MCP platforms can be used to their fullest. While there are debuggers that can debug and visualize multi-threading in the true sense, with interaction between threads, and compilers that can map application code to specific cores, reducing the effort required of programmers, there is still a lot more to do in leveraging MCPs for critical industrial automation platforms.


Lights, Sound and Magnetism – the science behind next-generation medical technologies!

It was often hard to imagine the far-reaching applications of basic physics when topics as humble as acoustics, optics and magnetism were introduced in our high school physics textbooks. It seems enthralling now to fathom how some of these basic disciplines have been applied to develop some of the most sophisticated medical technologies of today’s world. Out of this fascination, we decided to take a brief look at some of them –

Optical Coherence Tomography

Optical coherence tomography (OCT) is an emerging technology for performing high-resolution cross-sectional imaging. OCT is analogous to ultrasound imaging, except that it uses light instead of sound. It can provide cross-sectional images of tissue structure on the micron scale in situ and in real time, and can therefore function as a type of optical biopsy: unlike conventional histopathology, which requires removal of a tissue specimen and processing for microscopic examination, OCT images tissue in place. By using the time-delay information contained in light waves reflected from different depths inside a sample, an OCT system reconstructs a depth profile of the sample structure. Three-dimensional images can then be created by scanning the light beam laterally across the sample surface. Lateral resolution is determined by the spot size of the light beam, whereas the depth (or axial) resolution depends primarily on the optical bandwidth of the light source. For this reason, OCT systems can combine high axial resolution with a large depth of field, so their primary applications include in-vivo imaging through thick sections of biological systems, particularly in the human body. The figure below compares OCT resolution and imaging depth with those of alternative techniques; the “pendulum” length represents imaging depth, and the “sphere” size represents resolution (image source – UWA).
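As a rough worked example of that bandwidth dependence, the widely used expression for the axial resolution of an OCT system with a Gaussian source spectrum is Δz = (2 ln 2 / π)·λ₀²/Δλ. The numbers below are illustrative assumptions, not the parameters of any particular instrument:

```python
# Illustrative OCT axial-resolution calculation for a Gaussian source spectrum.
import math

lambda0 = 840e-9      # centre wavelength: 840 nm (assumed, typical near-infrared source)
delta_lambda = 50e-9  # optical bandwidth: 50 nm (assumed)

delta_z = (2 * math.log(2) / math.pi) * lambda0**2 / delta_lambda
print(f"axial resolution ≈ {delta_z * 1e6:.1f} micrometres")   # ≈ 6.2 µm
```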

Ultrasound Elastography

Elastography is based on the principle of physical elasticity: a pressure is applied to the examined medium and the induced strain distribution is estimated by tracking the tissue motion. It uses the visualization of the propagation of mechanical waves through tissue to derive either a shear wave velocity or a Young’s modulus as a measure of tissue stiffness. In practical terms, RF ultrasonic data before and after the applied compression are acquired, and speckle tracking techniques, e.g., cross-correlation methods, are employed to calculate the resulting strain. The resulting strain image is called an elastogram. The primary goal of elastography was the identification and characterization of breast lesions. To acquire an elastography image, the ultrasound technician takes a regular ultrasound image and then pushes on the tissue with the ultrasound transducer to take a compression image. Normal tissue and benign tumors are typically elastic or soft and compress easily, whereas malignant tumors barely compress at all. The image below shows a traditional ultrasound image and a corresponding real-time elastogram of an ablated lesion in an ex vivo liver. In the elastogram, blue corresponds to hard tissue and red corresponds to soft tissue. The lesion is not clearly visible in the traditional ultrasound image because the ablation process does not change the echogenicity of the tissue significantly. However, the lesion is clearly visible in the elastogram (dark blue area) because the ablation process hardens the tissue significantly. Image source – TAMUS.
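To make the speckle-tracking step concrete, the following is a hedged, one-dimensional Python sketch: local displacement between the pre- and post-compression RF lines is estimated by windowed normalized cross-correlation, and the strain is taken as the gradient of that displacement. A real elastography pipeline adds sub-sample interpolation, regularization and two-dimensional tracking; the window sizes and lags here are arbitrary placeholders.

```python
# 1-D speckle tracking by windowed cross-correlation -- illustrative only.
import numpy as np

def estimate_strain(rf_pre, rf_post, window=64, step=32, max_lag=16):
    """Return axial displacement (in samples) and strain along one RF line."""
    displacements = []
    for start in range(0, len(rf_pre) - window - max_lag, step):
        ref = rf_pre[start:start + window]
        best_lag, best_score = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            if start + lag < 0:
                continue
            seg = rf_post[start + lag:start + lag + window]
            if len(seg) < window:
                continue
            # Normalized cross-correlation between the reference and the shifted window.
            score = np.dot(ref, seg) / (np.linalg.norm(ref) * np.linalg.norm(seg) + 1e-12)
            if score > best_score:
                best_lag, best_score = lag, score
        displacements.append(best_lag)
    displacement = np.array(displacements, dtype=float)
    strain = np.gradient(displacement) / step   # change in displacement per axial sample
    return displacement, strain
```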

Magnetoencephalography

Magnetoencephalography (MEG) is a non-invasive technique used to measure the magnetic fields generated by small intracellular electrical currents in neurons of the brain. It allows the measurement of ongoing brain activity on a millisecond-by-millisecond basis, and it shows where in the brain the activity is produced. MEG measurements are made externally, using an extremely sensitive device called a superconducting quantum interference device (SQUID). The SQUID is a very low-noise detector of magnetic fields, which converts the magnetic flux threading a pickup coil into a voltage, allowing the detection of weak neuromagnetic signals. Since the SQUID relies on physical phenomena found in superconductors, it requires cryogenic temperatures for operation. Owing to the low impedance at this temperature, the SQUID can detect and amplify magnetic fields generated by neurons a few centimeters away from the sensors. A magnetically shielded room houses the equipment and mitigates interference. Applications of MEG include localizing regions affected by pathology before surgical removal, determining the function of various parts of the brain, and neurofeedback.

Watch this space as we take a deeper dive into some of these technologies and explore the rapidly evolving medical technology landscape!

Low Energy Bluetooth – Possibilities for connected point of care diagnostic devices

Information sharing, or more explicitly connectivity between devices and systems for data sharing, is becoming a key aspect of personal and clinical healthcare solutions. While wired systems have been used, they definitely have user adaptability issues: imagine a patient carrying a bunch of wires for a personal healthcare monitoring system, or a patient lying on the operating table surrounded by wires. Despite regulatory constraints, wireless technology is quickly proliferating as the preferred means of communication for wide-area applications (e.g., remote monitoring of patients) as well as short-range needs (e.g., patient monitors).

Companies can opt for a proprietary RF wireless solution or a standard wireless solution like Wi-Fi, Bluetooth or Zigbee. A proprietary solution gives better control and can cater to specific needs, whereas using a standard technology helps reduce development and testing effort and makes regulatory expectations easier to manage.

The low energy Bluetooth standard is designed as a low-cost solution with a focus on low power consumption, and it is targeted at data collection from sensor-based device networks. It works on the concept of transmitting small amounts of data on an event (e.g., the periodic capture of vital data to a central hub), which results in low power consumption compared to regular Bluetooth (with its continuous data streaming). The event-based system wakeup makes it ideal for sensor-based device networks. Its enhanced range is over 100 meters, with connection setup and data transfer latency as low as 3 ms. It also supports full AES-128 encryption using CCM. Low energy Bluetooth uses adaptive frequency hopping to minimize interference from other technologies, like Wi-Fi, in the 2.4 GHz band. The best part is that in dual mode a device can work on both the classic Bluetooth and the low energy Bluetooth protocol, depending on the master device, whereas in single mode it works on the low energy Bluetooth protocol only.

The standard Bluetooth health device profile focuses on patient monitoring and personal healthcare devices in both home and clinic environments. Personal healthcare is one of the best-identified business cases for low energy Bluetooth, and with initiatives from the Continua Health Alliance, there is already the required traction for integration between third-party devices. However, there is also a great opportunity (via the serial port profile or other standard profiles) to use this technology in other areas of device connectivity. Some examples could be:

  • Low energy Bluetooth enabled patient bedside vital monitoring devices, which send vitals data to a central device. The central hub could further connect to a central server on LAN or WLAN.
  • Continuous diabetes monitoring devices, which send data to a smartphone application. The data could be uploaded from the smartphone to a central server for data management or other usage.
  • Post-surgery or rehabilitation monitoring devices
  • Smart wireless diagnostic catheters with a smart node that sends data to a central data collection device over low energy Bluetooth
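As a hedged sketch of how simple the device side of such connectivity can be, the snippet below uses the Python bleak library to subscribe to the standard GATT Heart Rate Measurement characteristic of a hypothetical low energy Bluetooth monitor. The device address is a placeholder, and a production medical device would implement the full health/heart-rate profile and appropriate security rather than this bare-bones read.

```python
# Minimal BLE notification sketch using the 'bleak' library -- illustrative only.
import asyncio
from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"                              # placeholder address
HEART_RATE_MEASUREMENT = "00002a37-0000-1000-8000-00805f9b34fb"   # standard GATT UUID

def on_heart_rate(_sender, data: bytearray):
    # Per the GATT spec, bit 0 of the flags byte selects an 8- or 16-bit heart-rate value.
    hr = data[1] if not (data[0] & 0x01) else int.from_bytes(data[1:3], "little")
    print(f"heart rate: {hr} bpm")

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(HEART_RATE_MEASUREMENT, on_heart_rate)
        await asyncio.sleep(30)      # collect notifications for 30 seconds
        await client.stop_notify(HEART_RATE_MEASUREMENT)

asyncio.run(main())
```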

References

http://www.bluetooth.com/Pages/low-energy.aspx

Challenging World of Software Product Line

Software Product Line (SPL) engineering is not a new concept and has existed for many years. It has been used across many industries, from avionics to medical, automotive to consumer electronics, and telecom to storage. Big industry names like Boeing, Philips, Toshiba, Nokia, Ericsson (AXE) and GM have applied the concept successfully. However, it has still not become The Thing in the industry. Some of the challenges that inhibit this are:

  • Managing variability across products
  • Investment required in building the SPL based infrastructure
  • People resistance and the skills required

Variability across products is easier to manage if enough time is invested in identifying the products under the product line and the core asset features are defined from a domain perspective. The process should be driven at the level of the domain, product and system, not from the software level. From a broader perspective, SPL could be approached as a Platform Product Line, where every component of the system, including hardware, is covered.

The fundamental cost line items are to a large extent the same as those of standard software development. However, the economics of SPL suggest that as the number of products increases, SPL-based product development becomes more cost effective than the standard development approach. The initial investment in SPL is large, but over time the benefits grow, and the initial investment can be managed by using a hybrid SPL approach: developing a product and the SPL core assets in parallel, with more effort spent on the upper life-cycle phases to manage the SPL expectations.

Resistance to change is a human tendency, and in the case of SPL, although the basic activities are the same as in a standard software development process, a different mindset and thought process is required. Sponsorship from senior leadership can help in managing the resistance to change, but small steps towards the bigger SPL objective are even more helpful. The small steps could be to start following the SDLC activities (in spirit) or, if they are already followed, to move towards Model Driven Development. In cases where the whole process is also designed to be a skill enhancement for the individual, the change process is smoother.

In summary, the SPL approach has advantages such as reduced time to market, lower long-term cost, consistently better quality, and market adaptability. However, successful SPL requires time to deploy, a flexible process framework which evolves with experience, and a thoughtful approach, which varies across industries, products, domains and organizations.

References

  1. http://www.sei.cmu.edu/productlines/
  2. http://www.splc.net

Rehabilitation through Brain Machine Interfaces

Stephen Hawking: Former Lucasian Professor of Mathematics at the University of Cambridge, world renowned theoretical physicist, diagnosed with Amyotrophic Lateral Sclerosis (ALS) at the age of 21

Christopher Reeve: American film actor, fondly remembered for his motion-picture portrayal of the fictional superhero Superman; suffered a spinal cord injury and became a quadriplegic at the age of 42

These are probably some of the first names that pop up when we think of people living with disability and the need for rehabilitation technologies. Severe forms of disability also arise from limb amputations, whether as an aftereffect of traumatic accidents, degenerative diseases or social violence.

These conditions often mean drastic lifestyle changes for the disabled, worsened by limited means of livelihood, social disconnect and dependence on others. Restoring mobility for such patients is a goal of many research teams around the globe, most focusing on repairing damaged nerves and finding ways for nerve signals to bypass the injury site. Another approach is to build prosthetic devices, which are essentially closed-loop architectures based on biofeedback (EMG, EEG etc.) signals.

The Brain Machine Interface (BMI) is a recent technological advancement which taps brain waves using surface EEG electrodes and uses them to control a prosthetic device. The guiding premise in the design of such interfaces is the following:

The two activities of actually moving a real arm and merely thinking about moving the arm produce the same neuronal firing in the brain.

A smart wheelchair based on this premise could be a system with an EEG cap to non-invasively acquire signals from the patient’s primary motor cortex, a real-time pattern recognition algorithm to identify the patient’s mental state – whether he is thinking ‘left’ or ‘right’ or is idle – and a controller that steers the wheelchair based on these inputs. The EEG cap, as opposed to implanted neurochips, has limited bandwidth, but at the same time it is free from the complications of surgical procedures.
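A minimal sketch of the kind of pattern recognition described above might band-pass filter the EEG around the mu rhythm (8-12 Hz), use log band power per channel as features, and train a linear classifier to separate ‘left’, ‘right’ and ‘idle’ trials. The code below assumes pre-recorded, labelled trials and a sampling rate of 256 Hz; it uses SciPy and scikit-learn and is an illustration of the general approach, not the wheelchair system itself.

```python
# Illustrative motor-imagery classification sketch (not a clinical BCI).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256  # assumed EEG sampling rate in Hz

def mu_band_power(trial, low=8.0, high=12.0):
    """Band-pass filter each channel around the mu rhythm and return log band power."""
    b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, trial, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1))   # one feature per channel

def train_classifier(trials, labels):
    """trials: (n_trials, n_channels, n_samples) array; labels: 'left'/'right'/'idle'."""
    features = np.array([mu_band_power(t) for t in trials])
    return LinearDiscriminantAnalysis().fit(features, labels)

def predict_state(clf, trial):
    """Classify a single trial into one of the trained mental states."""
    return clf.predict(mu_band_power(trial)[None, :])[0]
```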

In an attempt to develop prosthetic technology capable of restoring motor control and tactile feedback to spinal cord injury patients, an interesting experimental study was reported in Nature in which researchers from the Duke University Center for Neuroengineering successfully trained two monkeys to use the electrical activity in their motor cortex to control the arm of an onscreen avatar without physically moving, hinting at the possibility of wearable thought-controlled prosthetic devices in the near future.

A recent case report from Cornell University represents the first successful demonstration of a BCI-controlled lower-extremity prosthesis for independent ambulation, which might allow a cheap, easy, and non-invasive option for getting paraplegics walking again. A wireless brain-machine interface developed by Neural Signals uses implantable electrodes, targeting jaw movements, to address locked-in syndrome, a terrifying condition resulting from a brain lesion which leaves patients aware but almost entirely without the power to move.

However fascinating these new developments seem, they are far from commercial reality. The non-invasive systems suffer from poor spatial and temporal resolution, driving the quest to devise more powerful and usable brain recording devices. The medical community would benefit for sure, but the possibilities are limitless – gaming, cursor control, brain training and brain-to-brain communication, to name a few. And who knows, it might also open an unethical dimension of hacking the brain!

The right product for emerging markets?

The last decade has witnessed a rapid shift of focus of large multinationals towards emerging markets, across diverse industrial domains. Apart from the obvious benefit of capitalizing on new opportunities, much of this paradigm change has been driven by the need to build sustainability into their business models. The medical technology industry has joined the bandwagon as well, primarily driven by some prominent growth drivers in the emerging economies – a changing medical technology landscape, improving healthcare delivery and financing, and changing patient profiles with increased life expectancy.

The medical technology industry in these markets is extremely competitive and fragmented, with domestic firms mainly manufacturing low-technology products and multinationals primarily importing high-end medical equipment. However, a new breed of home-grown mid-market innovators, cognizant of local needs, is shaking up the global competitive landscape with low-price, medium-quality products, while the global giants have evolved from distributor-based business mindsets to setting up local manufacturing units. But because of insufficient knowledge of the clinical needs and usage patterns of consumers in emerging markets, compounded by local competition that is barely visible to a multinational company headquartered in the U.S. or Europe, writing the requirement specification for the right kind of product is often a challenging task. GE Healthcare’s MAC 400 value-segment electrocardiography machine is a remarkable example of the right product, with the potential to convert a local community need into a viable and scalable business. Let us take a closer look at some of the salient features which could possibly define such a value-segment product:

[Acceptable Quality] It does what it is destined to do, and does it well. The underlying technology allows the end-user to carry out its intended clinical use.

[Affordable] It is affordable by the ‘emerging market customer’.

[Appropriate] It serves a need and is very useful. It can be a basic product platform stripped of some of its premium features in response to local needs. Usefulness indirectly implies that the product must be reliable and durable.

[Well-positioned] It is competitive but not an internal competitor. There should be clear lines of demarcation between the premium products and the value segment products so that the latter does not cannibalize the former.

[Innovative] It is nimble, evolving and innovative. The pressures of cost, pace and quality compel everyone to explore solutions outside conventional wisdom, which brings in the power of innovation. At its best, innovation can even become disruptive and give birth to an altogether game-changing product!

In the Indian context, it would be worthwhile to take a quick look at some of the home-grown value-segment product manufacturers, and learn their definitions of such products:

  • Opto-Circuits – Equipment, interventional devices
  • Sushrut Surgical – Orthopedic implants
  • Perfint Healthcare – Soft tissue intervention
  • Bigtec Labs – Life sciences
  • Poly Medicure Ltd – Consumables
  • SkanRay – Diagnostic X-ray
  • Relisys Medical – Stents, catheters
  • Trivitron Medical – Cardiology, imaging

The value-segment product for an emerging market is a paradigm, and perhaps can be better understood by analogies. The goal for such a product could then possibly be Mercedes-level quality and attractiveness, Toyota-level durability and margins, and Skoda-level prices.

Big Data in Healthcare: challenges & opportunities

The healthcare industry is witnessing explosive growth in the volume of digital medical data. Advances in digital imaging technologies and electronic patient record systems, combined with federally mandated data retention and retrieval policies, are presenting healthcare IT professionals with a number of new challenges related to storing, managing and providing access to medical data. According to IDC Health Insights’ 2010 EMR and PACS storage survey, storage takes up a large percentage of the overall IT budget for providers, with a large portion of outpatient centers (50%) and hospitals (57%) allocating more than 20% of their IT budget to storage. One of the most compelling trends observed at IBM’s Big Data Policy Event points to the fact that the amount of data generated per hospital will increase from 167 terabytes to 665 terabytes by 2015.

In the U.S., organizations that transmit an individual’s protected health information (PHI) across Internet applications or electronic systems are required to meet the requirements of the Health Insurance Portability and Accountability Act of 1996 (HIPAA). To be compliant, healthcare IT solution providers must design their systems and applications to meet HIPAA’s privacy and security standards and the related administrative, technical, and physical safeguards. Apart from confidentiality and robust access control protocols, the other hurdles in managing medical data lie in the following areas – long-term vendor viability, continuity of care through backup and disaster recovery solutions, rapid scalability and secure migration, multi-site and enterprise-wide collaboration, and of course affordability in terms of real estate and infrastructure requirements.

Cloud computing holds the promise of reduced costs, pay-as-you-go services, and improved agility, allowing organizations to leverage external IT capabilities that they may not have in-house. However, when it comes to medical data, the top concerns for health IT administrators are security and availability, which can be mitigated through properly architected cloud frameworks. These increased burdens, together with the higher sensitivity of handling medical data, have led the leading service providers in this sphere – EMC, NetApp, Amazon, HP, Hitachi, Intel, IBM and others – to develop specialized solutions tailored for the healthcare industry. An additional area of concern in the healthcare context is the rapidly evolving medical technology landscape. A close example is the field of personalized medicine, where next-generation genome sequencing technologies are rapidly churning out terabytes of data during standard gene annotation experiments, clearly signaling that we are only at the beginning of healthcare’s digital information explosion.

Despite the burden associated with the enormity of these datasets, the abundance of clinical data also holds the potential to change the course of healthcare. With advanced data mining techniques, large chunks of data can be leveraged to identify disease patterns, discover new drugs, optimize methods of clinical care, and manage patient flow efficiently. On the other end of the spectrum are initiatives such as Global Viral Forecasting (GVF), which are continually data hungry as they harness big data to prevent global pandemics before they start!