Hardware Certifications: An Introduction


Storage and virtualization companies such as VMware, Oracle, and Microsoft run certification programs for the independent hardware vendor (IHV) ecosystem. Why should you get your hardware setup certified with these companies? Certification wins you more customers and gives you access to their high-quality support.

That said, getting your hardware certified with VMware, Oracle, or Microsoft is not easy. You need to work closely with them to ensure that all necessary testing has been performed and the results thoroughly reviewed. This arduous process is what earns your hardware its certification. At MSys, we have completed quite a number of certifications for major clients.

At MSys, the certification process is exhaustive: we take care of every aspect, from testing to working with the certifying authority to finalize the certification. An example is Windows hardware certification, historically known as the Windows Logo Program and administered through Windows Hardware Quality Labs (WHQL). This certification helps you make products that customers trust and want to buy, and it is available for drivers, peripheral hardware, and complete systems intended for Windows.

The requirements for Windows hardware certification are published on Microsoft’s hardware certification page, where you can download PDF files detailing the system, device, and filter driver requirements, along with a document outlining the certification policies and processes.

For Windows hardware certification, you need to install the Windows Hardware Certification Kit (HCK), a suite of applications containing the tools, processes, and tests to certify your hardware for Windows. Next, you set up your test server and clients, after which you are ready to test your hardware setup with the HCK. Once testing is done, you submit the results to the hardware dashboard. As the final step, you manage the device metadata, bugs, error reports, profiles, and so on.

MSys has delivered client projects not only for Windows hardware certification but also for Oracle and VMware certifications. Throughout the process, we stay closely engaged with both the client and the certifying authority to ensure the project is completed on time and with minimal disruption to the client.

Virtual Disk Service and Volume Shadow Copy Service: An Introduction

Virtual Disk Service (VDS) and Volume Shadow Copy Service (VSS) are Microsoft technologies: VDS extends the existing storage capabilities of storage systems and servers, while VSS is a Windows service that enables data to be backed up even while the application that creates the data is running. Note, however, that VDS is now a legacy technology: starting with Windows 8 and Windows Server 2012, Microsoft superseded it with the Windows Storage Management API. You can also read our earlier blog entry detailing VDS and VSS.

The Virtual Disk Service (VDS) helps manage a large range of storage configurations, from single-disk desktops to external storage arrays, and exposes an application programming interface (API) for doing so.

[Diagram: VDS and VSS architecture]

Storage management activities performed by VDS include:

• Providing an API to the existing volume and disk management features of Windows
• Unifying volume management and hardware RAID management within a single API

VDS does not provide:

• Hardware subsystem management (e.g., temperature or performance monitoring for disk arrays)
• SAN fabric management (for instance, HBA zoning and security)

VDS and VSS Providers

There are two provider types for VDS and VSS: software providers and hardware providers. A software provider is a host-based program supported by a kernel-mode driver in the storage I/O stack. A hardware provider, on the other hand, implements the methods used to manage a storage subsystem, such as a hardware disk array or adapter card that creates logical disks configured for performance, data availability, and data recovery.

At MSys, we have worked with at least five clients on implementing VDS and VSS hardware providers: LSI, Wasabi Systems, StarWind, Nasuni, and Pure Storage. MSys engineers developed hardware providers for these clients’ storage servers, and over the years VDS/VSS hardware providers have become one of MSys’s strongest areas of expertise.

MSys’s work on these storage servers enabled the clients to showcase infrastructure on par with industry-leading backup and restore products such as IBM Tivoli Storage Manager. The following technologies were used to develop these hardware providers:

Languages: C++ (with COM)
IDE: Visual Studio 2008, 2010, and 2012
Platform: Microsoft Windows

Migrating Your Data and Applications to the Cloud

Cloud computing is intended to reduce IT organizations’ expenses by lowering capital expenditure: organizations purchase only the computing and storage resources they actually need. Today, given these enormous advantages, many organizations are exploring how the cloud could be leveraged to make their enterprise applications available on demand.

In the last few years, thousands of companies have moved to the cloud through public, private, or hybrid cloud offerings. Many others are considering the move because of these advantages.

Before you move to the cloud, it is important to look at the major advantages of cloud computing.

When it comes to migration to the cloud, you can take advantage of Microsoft’s Windows Azure, Google Cloud, Amazon AWS, Citrix, etc. Companies use these platforms to build websites, web apps, mobile apps, media solutions, etc.

Migrating to the cloud can create new business profit by making it cheaper to take risks and experiment. While risk taking in the past required heavy investment in hardware and software, the cloud lets you build an application on a completely scalable platform and deliver it as a service rather than selling licenses.

Despite these advantages, migration may not be an easy task. Enterprise applications, for instance, face strict requirements for performance, service uptime, and so on. Migrating them to the cloud requires you to analyze all these requirements very closely and come up with an in-depth migration plan that maximizes ROI.

Cloud migration can greatly reduce the hardware resources you need. Since pooled resources are better utilized, moving to the public cloud can dramatically decrease the need for in-house servers, which also reduces physical floor space and power consumption. In addition, as mentioned above, migration reduces operational and management costs. A number of solution and service providers in the cloud market can help you migrate easily at a reduced cost; MSys has been providing this type of migration service for years.

Things to Check

An important consideration when migrating to the cloud is the set of changes required in the architecture of the application being migrated. In many cases, the application must undergo a complete architectural overhaul to be fit for the cloud. A service-oriented application works well with the abstraction of cloud services through application programming interfaces (APIs).

You should also assess whether the application needs to be altered to take advantage of native cloud features, such as direct access to elastic storage, management interfaces, and auto-provisioning services.

Migration Roadmap

During the transition, you should ensure that the level of service provided in the cloud is comparable to or better than that of the traditional technical environment. An improper migration fails this requirement and results in higher costs, loss of business, and so on, eliminating any benefit the cloud could provide. The steps involved in migrating an application to the cloud include:

[Diagram: cloud migration roadmap]

1. Assess Your Applications and Workloads

This step lets an organization find out which data and applications can readily move to the cloud, and which delivery models each application supports. It also makes sense to sort the applications to be ported by risk factor, starting with the ones holding a minimal amount of customer data or other sensitive information.
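As a rough illustration of risk-based sorting, a simple scoring pass over an application inventory might look like the sketch below. The attributes and weights here are hypothetical, not a real assessment model:

```python
# Hypothetical sketch: rank applications for cloud migration by risk.
# The attributes and weights are illustrative placeholders.

def risk_score(app):
    """Higher score = riskier to migrate early."""
    score = 0
    score += 3 if app["holds_customer_data"] else 0
    score += 2 if app["holds_sensitive_info"] else 0
    score += 1 if app["uptime_critical"] else 0
    return score

apps = [
    {"name": "internal wiki", "holds_customer_data": False,
     "holds_sensitive_info": False, "uptime_critical": False},
    {"name": "billing", "holds_customer_data": True,
     "holds_sensitive_info": True, "uptime_critical": True},
    {"name": "marketing site", "holds_customer_data": False,
     "holds_sensitive_info": False, "uptime_critical": True},
]

# Migrate the lowest-risk applications first.
for app in sorted(apps, key=risk_score):
    print(app["name"], risk_score(app))
```

In practice, the scoring criteria would come from your own compliance and uptime requirements; the point is simply that an explicit, repeatable ranking makes the migration order defensible.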

2. Build a Business Case

Building a business case requires a proper migration strategy for porting your applications and data to the cloud. This strategy should incorporate ways to reduce costs, demonstrate advantages, and deliver meaningful business value. Value propositions of cloud migration include the shift from capital expenditure to operational expenditure, cost savings, faster deployment, elasticity, and more.
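To make the CapEx-to-OpEx proposition concrete, a back-of-the-envelope comparison can anchor the business case. All figures below are hypothetical:

```python
# Hypothetical figures: 3-year cost of on-premise hardware (CapEx)
# versus pay-as-you-go cloud instances (OpEx).

years = 3

# On-premise: buy servers up front, plus yearly power/cooling/admin costs.
server_cost = 120_000
yearly_upkeep = 15_000
capex_total = server_cost + yearly_upkeep * years

# Cloud: monthly instance charges only, scaled to actual demand.
monthly_cloud_bill = 3_500
opex_total = monthly_cloud_bill * 12 * years

print(f"on-premise: ${capex_total:,}")
print(f"cloud:      ${opex_total:,}")
print(f"savings:    ${capex_total - opex_total:,}")
```

A real business case would fold in migration labor, retraining, and exit costs, but even this simple arithmetic shows why the shift from up-front purchase to pay-as-you-go is central to the value proposition.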

3. Develop a Technical Approach

There are two potential service models for migrating an existing application: Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). For a PaaS migration, the application itself has to be designed for the runtimes available in the target PaaS. For IaaS, the requirements are far less strict, since you control the whole software stack on the provisioned machines.

4. Adopt a Flexible Integration Model

An application migrated to the cloud may already have connections with other applications, services, and data, so it is important to understand the impact on these connections before migrating. The integration model adopted may involve three types: process integration (one application invokes another to execute a workflow), data integration (integrating the data shared among applications), and presentation integration (multiple applications sharing results on a single dashboard).

5. Address Security and Privacy Requirements

Security and privacy are two of the most important issues in cloud migration. Security must be especially strong for applications that deal with sensitive information, such as credit card and Social Security numbers. Issues to address include making it difficult for an intruder to steal any data, proper notification on security breaches, the reliability of the cloud service provider’s personnel, and authorization controls.

6. Manage the Migration

After thoroughly analyzing the benefits and issues associated with migration, you can plan and execute it. Do so in a controlled manner, guided by a formal migration plan that tracks durations, resources, costs, and risks.

Our Expertise

In cloud migration, we have industry-wide expertise in SaaS, PaaS, and IaaS. In IaaS, we have worked on private and public clouds with infrastructures such as OpenStack, Amazon AWS, Windows Azure, Rackspace, VMware, Cloupia, and HP Cloud.

Clogeny, MSys’s subsidiary, has worked on hybrid cloud migration projects for leading clients in server imaging and datacenter provisioning. We have helped add support for several public vCloud Director implementations, including Bluelock, AT&T, Savvis, and Dell. In addition, we have architected a hybrid cloud migration appliance for VMware vSphere. In enterprise Java PaaS, we have worked with VMware vCloud Director, AWS, HP Cloud, and Rackspace.


Cloud computing provides key benefits for companies, and migrating to the cloud can create a better, more modern business model for most tech firms.

Q&A on Mobitaz Android Test Automation Webinar

On June 5, 2014, we conducted a webinar on Mobitaz, MSys’s Android test automation tool. A number of testing and quality assurance professionals from various companies participated. Needless to say, it was a big success.

It can be challenging for a quality analyst to choose the right mobile functional-testing tool. Manual testing, or automation tools with limited capabilities, can hinder the pace of the QA process.

The Mobitaz team at MSys explored the need for a device- and OS-agnostic mobile testing tool that assures a QA team that testing isn’t compromised and that the QA cycle doesn’t take too long. The discussions in the Mobitaz webinar point to a significant shift from traditional automation techniques and solutions.

Some of the interesting discussions initiated by the quality assurance personnel from leading companies are as follows:

Parallel execution

Q: Does it mean that I can run a test made for KitKat on Gingerbread and ICS at the same time?

A: Yes. Concurrent playback on KitKat, Gingerbread, and Ice Cream Sandwich can be achieved through Mobitaz.

Q: So, Mobitaz adapts to Android objects that differ between OS versions, such as the progress bar between Gingerbread and KitKat?

A: Yes. This is something unique about Mobitaz: the tool records a test case once and plays it back across any Android device or OS version. Mobitaz has the intelligence to recognize objects across different Android versions, which enables successful parallel test execution.


Q: What are the advantages of this Android test automation tool over other mobile automation tools in the market?

A: We compare Mobitaz directly with other tools that offer a lab-based solution. A few of its advantages over other mobile testing tools are:
• Support for
o Android Custom components
o Android Web-View components
• Parallel execution
• Testing on real devices without rooting
• Detailed reporting with easy option to export and share to PDF format
• Key measurements of resources such as battery, CPU, memory, etc.
• Mobile functional testing for Android versions from Gingerbread to the latest version
• Simplified licensing model
• Cost-effectiveness

Script-less Testing

Q: Does Mobitaz Android test automation tool require any scripting knowledge to create, execute, and generate reports?

A: No. Mobitaz is a script-less test automation tool and requires no programming knowledge for functional testing of mobile apps. It manages test cases through features such as Object Repo, Test Case Editor, Reports, etc.

Eight Tips to Be More Effective in Agile Software Testing

Agile software development happens fast, and code releases happen frequently. Testing in such an environment is essential to producing accurate code that works. How does a programmer ensure the quality of the code? An agile environment presents three major challenges:

  • Gathering the requirements and the number of hours committed
  • Creating short-term releases
  • Keeping scrums short to leave more time for code inspections

As an agile software tester, you should be very proficient with the tools you use. Here are eight tips to be more effective in agile software testing.

1. Character Traits of an Agile Tester

A few character traits and mindsets make a successful agile tester: being passionate, creative, and unafraid is important. The agile tester should also have soft skills in management, communication, and leadership. These skills help you anticipate the client’s expectations before the product is delivered.

2. Understanding the Data Flow

When you know how data travels inside your application, you can better analyze the impact of component failures and security issues. Learn how data is used within the application early on so you can report bugs and defects faster.

3. Analyzing the Logs

In agile development, understanding the defect behind an issue in the application under test involves log analysis. Application logs contain a great deal of information about the system-level architecture of the application. Some errors a tester needs to know about are “silent errors”: the end user never perceives their effect. Log analysis helps you spot silent errors and work more efficiently with the development team.
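As a sketch of the idea, a tester might scan application logs for errors that never surfaced to the user. The log format and entries below are made up for illustration:

```python
# Hypothetical log format: "LEVEL component message".
# "Silent errors" here are errors the system logged but that produced
# no visible failure for the end user.

log_lines = [
    "INFO  auth    user login ok",
    "ERROR cache   write-behind flush failed (retried silently)",
    "INFO  ui      page rendered in 120 ms",
    "ERROR payment gateway timeout, fell back to queued retry",
]

def silent_errors(lines):
    """Return ERROR entries worth raising with the development team."""
    return [ln for ln in lines if ln.startswith("ERROR")]

for err in silent_errors(log_lines):
    print(err)
```

A real pass would correlate these entries with user-visible failures (e.g., by request ID) to separate truly silent errors from reported ones; this sketch only shows the first filtering step.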

4. Risk- and Change-Based Testing

In agile development, both development and testing happen on the fly. Go-to-market time is what matters, and the teams work together to optimize it. When the application is modified, you, the tester, need to understand which parts are changing and the overall effect of the change on the final application.

5. Understand the Business Objectives

The agile tester is essentially a stand-in for the end user of the product, so you should know how end users use it. To focus your testing strategies, concentrate on the key areas of the application an end user is most likely to touch, and create separate strategies for product architecture and for end users. This end-user-centric categorization also lets you report bugs according to the application’s business objectives, i.e., prioritize defects. At the end of the day, meeting end-user requirements is what any business needs; based on the user stories, QA teams prepare the acceptance criteria.

6. Browser Tools

Browser plugins and tools can be highly effective for agile testers. For instance, Google Chrome and Firefox come with built-in developer tools that let testers spot errors immediately, and there are third-party extensions such as Firebug as well.

7. Requirement Repositories

Understand what type of agile development strategy your organization uses: Adaptive Software Development (ASD), Agile Unified Process (AUP), Kanban, Scrum, etc. Documentation of the test cases and scenarios the development and testing teams create together is very important. Over time, requirements and test scenarios accumulate in a repository-style system from which a tester can draw a great deal of information.

8. Test Early, Often, and Continuously

Exploratory Testing (ET) is a practice in which testing is instantaneous, which makes it very important in agile development. Many testing professionals believe testing should happen as early, as often, and as continuously as possible for proper application delivery. All types of testing (functional, load, etc.) should be included in the project plan.
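The “test early and often” habit is easiest to build when tests are cheap to write and run on every commit. A minimal example using Python’s built-in unittest module, where the function under test is hypothetical:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    # Small, fast tests like these can run on every commit,
    # keeping feedback early, often, and continuous.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

# Run the suite; in a CI pipeline this would happen automatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Tests this small and fast are what make continuous testing practical: the whole suite runs in milliseconds, so there is never a reason to postpone it to the end of an iteration.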


In agile software development, the development stages matter more than the end product, so testing is an integral part of the process. In the early days of software testing, quality assurance personnel had little visibility into what was being tested or into the results. With the agile movement, software companies and professionals have a more real-time view of the testing environment and scenarios. Shorter iterations lead to smaller test cases, and a good test automation solution helps produce faster builds.

To deliver a quality product on a short schedule, MSys moved from the conservative waterfall and V-Model approaches to agile testing, which paved the way for us to address customers’ continuously changing requirements and quality feedback.

Big Data and Your Privacy: How Concerned Should You Really Be?

Today, every IT-related service, online or offline, is driven by data. In the last few years alone, the explosion of social media has produced a humongous amount of data, nearly impossible to manipulate without specialized high-end computing systems. Most of us are familiar with kilobytes, megabytes, and gigabytes of data, some even with terabytes. But on the Internet, data is measured on entirely different scales: petabytes, exabytes, zettabytes, and yottabytes. A petabyte is a million gigabytes, an exabyte is a billion gigabytes, and so on.
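To put these scales side by side (using decimal units, where each step is a factor of a thousand):

```python
# Data-size scales expressed in gigabytes (decimal units).
GB = 1
PB = 10**6 * GB   # a petabyte is a million gigabytes
EB = 10**9 * GB   # an exabyte is a billion gigabytes
ZB = 10**12 * GB  # a zettabyte is a trillion gigabytes
YB = 10**15 * GB  # a yottabyte is a quadrillion gigabytes

print(f"1 PB = {PB:,} GB")
print(f"1 EB = {EB:,} GB")
print(f"1 ZB = {ZB:,} GB")
```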

A Few Interesting Statistics

Let me pique your interest with a few statistics here from various sources:

  • 90 percent of data in existence in the world was created in the last two years alone.
  • Take Facebook, for instance: there are 54 million pages, and every twenty minutes a million links are shared, two million friend requests are made, and three million messages are sent. On top of that, there are over 81 million fake Facebook accounts.
  • Amazon sells five times as much as Wal-Mart, Target, and Buy.com combined because it steadily grew from a tiny bookseller into a 74-billion-dollar company by incorporating all the customer data it has gathered since 1994. In a week, Amazon reaches close to 130 million customers; imagine the enormous amount of big data it can gather from them.

Google’s former CEO and current executive chairman, Eric Schmidt, once said: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” The significance of this statement becomes evident when you realize the magnitude of data the search giant crunches every second: Google’s expansive index stores an estimated 15 to 20 billion web pages.

[Chart: the size of the Google index]

On a daily basis, Google processes five billion queries. Beyond these, through numerous Google apps that you continuously use, such as Gmail, Maps, Android, Google+, Places, Blogger, News, YouTube, Play, Drive, Calendar, etc., Google is collecting data about you on a huge scale.

All of this data is known in industry circles as “big data.” Processing such huge chunks of data is not possible with ordinary hardware and software, which is why industry-standard frameworks exist for the purpose. Apache Hadoop, an open-source implementation of the MapReduce model Google pioneered, is one such system. The various components of Hadoop (HDFS, MapReduce, YARN, etc.) provide intense data manipulation and processing capabilities. Similar to Hadoop, Apache Storm is a big data processing technology used by Twitter, Groupon, and Alibaba (the largest online retailer in the world).
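The MapReduce model that Hadoop popularized can be sketched in a few lines of plain Python. This shows only the programming model (map, shuffle, reduce), not Hadoop’s distributed, fault-tolerant implementation:

```python
from collections import defaultdict

# The classic MapReduce example: counting words across "documents".
documents = ["big data big insights", "data drives insights"]

# Map phase: emit (key, value) pairs from each input record.
mapped = []
for doc in documents:
    for word in doc.split():
        mapped.append((word, 1))

# Shuffle phase: group values by key (Hadoop does this across a cluster).
grouped = defaultdict(list)
for key, value in mapped:
    grouped[key].append(value)

# Reduce phase: combine each key's values into a final result.
counts = {key: sum(values) for key, values in grouped.items()}

print(counts)  # {'big': 2, 'data': 2, 'insights': 2, 'drives': 1}
```

Hadoop’s value is that it runs exactly this pattern over terabytes of input split across many machines, with HDFS storing the data and YARN scheduling the work.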

The effects and business benefits of big data can be quite significant. Consider the growth of Amazon in the last few years: in a lengthy article, George Packer gives an account of Amazon’s rise from a bookseller to the multi-product online-retail behemoth it is today. What made that happen? In essence, what made the internet giants what they are today? Companies such as Facebook, Google, Amazon, Microsoft, Apple, and Twitter have reached their positions by systematically processing the big data generated by their users, including you.

In essence, data processing is an essential tool for success on today’s Internet. But how does the processing of your data affect your privacy? Some of these internet giants gather and process more data than all governments combined. There really is reason for concern, isn’t there?

Look at the National Security Agency (NSA) of the US. It’s estimated that the NSA has a tap on every smartphone communication in the world that passes through any company established in the United States; in the world of technology, the NSA is the new CIA. Remember the PRISM program that NSA contractor Edward Snowden blew the whistle on: for six years, PRISM remained under cover, and we now know the data it collected is several times the magnitude of the data collected by any technology company. Moreover, as the Washington Post reported, the NSA has a surveillance system that can record one hundred percent of the telephone calls in any country, not only the United States. The NSA also allegedly has the capability to remotely install a spy app (known as DROPOUTJEEP) on iPhones; the app can then activate the iPhone’s camera and microphone to gather real-time intelligence about the owner’s conversations. Independent security researcher and hacker Jacob Appelbaum reported this capability.

The NSA gets a record of every activity you perform online: telephone and VoIP conversations, browsing history, messages, email, online purchases, and more. This big data collection is arguably the biggest breach of personal privacy in human history. While the government assures us the entire process serves national security, the general public definitely has concerns.

Privacy Concerns

While companies use your data to grow their profits, governments use big data to further surveillance. In a nutshell, this could all mean one thing: no privacy for the average individual. As far back as 2001, industry analyst Doug Laney characterized big data with three Vs: volume, velocity, and variety. Volume refers to the vastness of the data coming from people around the world (which we saw earlier); velocity, to the breathtaking speed at which the data arrives; and variety, to the diversity of forms the raw data takes.

What real danger is there in sharing your data with the world? For one thing, if you are strongly concerned about your privacy, you shouldn’t be doing anything online or over your phone at all. Yet while sharing your data helps companies like Google, Facebook, and Microsoft show you relevant ads (while increasing their advertising revenues), there is virtually no direct downside for you. The data generated by your activities is amalgamated with the big data generated by other users like you; in many ways it is similar to disappearing into a crowd, something people do in the real world on a daily basis.

Online, however, there is always a trace that leads back to you: through your country’s internet gateway, your specific ISP, and your computer’s IP address (attached to a timestamp if you have a dynamic IP). So it’s entirely possible to create a log of everything you do online. Facebook and Google already have such a log, the thing you call your “timeline.” Today the timeline is a simple representation of your online activities attached to a social media profile, but combined with a trace of your computer’s web access, the data generated is pretty much your life’s log. Then it becomes sort of scary.

You are traceable not only in front of your computer but also when you move around with your smartphone. The phone can be tapped to capture every bit of your conversations, and its hardware components (camera, GPS, and microphone) can be used to trace your every movement.

When it comes to online life, the choice is between your privacy and better services. If you divulge your information, companies can show you useful ads for products you may genuinely like (and play God with your life!). On the other hand, there is always the nagging fear that your every movement is being watched. To avoid it, you would have to do the things you want to keep secret offline, away from any connected digital device (in essence, any device with a power source attached).

An article I read some time back mentioned that the only way to escape surveillance is to remove the battery from your smartphone.

The question remains: how can you trust any technology at all? There are a huge number of surveillance technologies and projects that people still don’t know about. With PRISM, we learned about the NSA’s tactics, although most of them were an open secret. Which other countries engage in such tactics is still unknown.

Advantages of the DevOps Continuous Delivery Model

You may be wondering what DevOps means. According to Wikipedia, it’s a portmanteau of Development and Operations, two integral parts of any software firm: development teams build a software product or service, and Operations gets it into production.

In practice, these two parts of the firm face slightly different challenges. The software market is expansive, and for a company such as MSys it is also quite dynamic: developments happen daily in storage, embedded systems, telecommunications, quality assurance, virtualization, and more. The operations team has to have first-hand knowledge of these developments and must be able to react quickly to market dynamics. Operations also has to keep hardware systems in pristine condition to support development lifecycles.

On the development side, the challenges are more technical: timely releases, quality assurance of those releases, additional requirements gathering, and constant communication. On deeper analysis, though, the two sets of challenges largely overlap. This is one reason so many companies have adopted a model in which development works hand in hand with operations. MSys has a similar structure, in which the operations team has sophisticated knowledge of the development lifecycle and the products.

A few stereotypes have long been attributed to Development and Operations. For instance:

  • Developers are lazy and uninterested in deployment and operations.
  • Operations always blame developers for failure of the application or deployment.
  • Operations always complain they are kept out of the loop on feature enhancements and new developments.
  • Operations are not concerned about code, and developers are not concerned about business growth.

In a modern IT business, these differences of opinion can be costly. A hand-in-hand approach, in which developers and operations work together, can have many benefits.

1. Business Change & Growth

In this fast-paced world especially, business changes happen often, and keeping abreast of them is vital to growth. The team that works with clients, manages activities, and regulates the business is the operations team; if it doesn’t get adequate help from Development, growth can be stunted. This is one reason DevOps is expected to drive enormous business growth in the coming days.

2. High-Quality Releases More Frequently

This is an objective of any software development firm, isn’t it? High value, earlier! It’s an objective of DevOps too. Frequent, high-quality releases are much easier to achieve with constant communication between Development and Operations.

3. Everyone Knows What’s Going On

With a shared version control system, the operations team gains inside knowledge of the software lifecycle. Building and deploying in one step is possible today, and a shared history makes clear what changes were applied, when, and by whom.
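The “what, when, and by whom” guarantee can be as simple as an append-only deployment log. Here is a hypothetical sketch of a one-step build-and-deploy wrapper that records it (the build and push steps are stubs standing in for a real pipeline):

```python
import datetime

# Hypothetical sketch: a one-step deploy that records what was applied,
# when, and by whom, so Dev and Ops share the same history.

deploy_log = []

def deploy(version, author, build, push):
    """Build and deploy in one step, appending an audit record."""
    build(version)   # e.g. compile, run tests, package the artifact
    push(version)    # e.g. upload the artifact, restart the service
    deploy_log.append({
        "version": version,
        "author": author,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })

# Stub build/push steps stand in for the real pipeline commands.
deploy("1.4.2", "alice", build=lambda v: None, push=lambda v: None)
deploy("1.4.3", "bob", build=lambda v: None, push=lambda v: None)

for entry in deploy_log:
    print(entry["version"], entry["author"], entry["at"])
```

In a real setup the same record would come for free from the version control and CI history; the point of the sketch is that deployment becomes a single audited action rather than a hand-off.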

4. Simple Code Improvements

In an agile development environment, changes tend to be small but frequent, and even minor changes can significantly impact a product. A DevOps team can manage minor code modifications and improvements more efficiently through Continuous Integration.

5. Improves Interpersonal Relationships

When there is no longer a divide between teams, there are fewer differences of opinion. People working together build a better community and stronger interpersonal relationships, which is one of the ways DevOps improves a company’s culture. This in turn improves each team’s outlook toward any failure.


DevOps makes it possible to improve a company’s products and services across the board. The aim remains to improve the delivery model and satisfy more customers.

How Embedded Systems Transform the Healthcare Industry

Imagine how cumbersome healthcare used to be in the past. Back then, a person who wasn't feeling well approached a doctor, who prescribed medicines based on the patient's external symptoms. How accurate can a diagnosis be in such cases? The reason a disease we shrug off today could kill masses a few decades ago was that diagnosis wasn't thorough. Then the technology advanced. We got X-ray, ECG, EEG, MRI, CT, pulse oximeters, GlucoWatches, electronic defibrillators, and a large number of sophisticated gadgets (embedded systems) behind acronyms the general public has never heard of. Now the technology is even more advanced: new microchips, nanotechnology, and embedded systems have revolutionized the healthcare industry.

Look at General Electric, the multi-billion dollar vendor of all kinds of electric systems. GE is among the world's leading providers of medical embedded systems, and its range covers everything from scanning machines and imaging systems to diagnostic equipment. Behind all these advanced systems is embedded technology. Take a look at this image of a huge PET scanner from GE, a perfect example of an embedded system:

GE PET scanner

[Image Source: General Electric]

I was perusing IEEE Spectrum and stumbled upon an article that describes what the future has in store for us. In the next few years, newborn babies may get tiny sensors within the first few minutes of their birth. A chip planted in the infant's body would continuously monitor its health, and according to the article, the biometric data the chip generates and stores in the cloud during the child's first two years could exceed the entire amount of data created by everyone in the world today. In essence, medical professionals could use this data to track every aspect of the child's health.

Soon, medical gadgets will be more glittering and sophisticated than anything in the books of Ian Fleming. A few days ago, BBC reported on a gadget, a tiny ring, that records and catalogs a person's medical conditions. It is a perfect wearable that comes in handy in emergencies: a microchip embedded inside the ring alerts paramedics during an emergency.

health technology ring
[Image Source: BBC]

You have probably already heard about electronic tattoos that stretch with your skin. These temporary electronic tattoos are powered by solar energy and can replace bulkier gadgets used to monitor an individual's health. Since the material is stretchable, sturdy, and highly flexible, you will not even notice you are wearing a health monitor.

When it comes to advanced robotics for intricate surgical procedures, check out the da Vinci Surgical System, manufactured by Intuitive Surgical, Inc. It was among the first robotic surgery systems to receive approval from the US Food and Drug Administration (FDA).

da Vinci Surgery system
[Image Source: Intuitive Surgical, Inc]

But now, a team of researchers from the University of California, Santa Cruz, and the University of Washington has successfully created a set of seven robotic surgery systems for use by medical research labs across the US. These systems take an open-source approach to software development, cutting the cost of ownership to a minimum.

When it comes to embedded technology within gadgets, you probably know about a number of technologies already. Real-time operating systems in the embedded world find applications even in military-grade equipment; examples such as QNX (acquired by BlackBerry), OSE, VxWorks, uClinux, and LynxOS come to mind.

From small embedded systems that monitor the heart rate or identify a blockage in an artery, the technology has seeped into intricate surgical procedures. It is quite possible that in the future you could own your own robot doctor, with or without remote assistance from a real one. As the functionality of embedded systems in healthcare increases, one thing that decreases is their size. A recent article in Discover suggests a possible device, a "microbot", about a decade from now that could be inserted into your body through a tiny surgical incision. The microbot could travel through your blood vessels to reach the area of concern, fix minor issues such as a blockage in an artery, and collect tissue specimens for testing. A tiny camera attached to the device could send high-definition images and video to the doctor showing what is happening inside your body. The futuristic microbot could be powered by a tiny motor about the width of two human hairs.

As you can see, medical technology has advanced quite a bit with the help of embedded technologies. Stretching the limits of Moore's law, semiconductors, processors, and chips are shrinking exponentially while the number of transistors on each chip grows by leaps and bounds. The SoCs, embedded operating systems, and software that power these devices undergo serious R&D. MSys also has considerable experience in embedded technologies and real-time operating systems (RTOS), making us a strong innovator for the technology of tomorrow.

Related Article:

Internet of Things: An Introduction

Storage, Virtualization, Testing, Test Automation, Embedded Systems