Wednesday, November 26, 2014

38. WHAT IS QUICK CHARGE 2.0 FROM QUALCOMM?                                

We see it in Qualcomm's tweets. We see it in some published articles. We see it in the fine print from a few smartphone makers. It says "fast charging using Quick Charge 2.0." What is it?

In my earlier posts, I spoke about two pieces that are essential for fast charging. The first piece is power delivery from the wall socket to the mobile device, and the second piece is battery management that ensures optimal operation of the battery itself under the stressful conditions of high charging power.

Well, Qualcomm's Quick Charge 2.0 is about the first piece, and only the first piece, i.e., ensuring power delivery to the mobile device and the battery. It does not perform any meaningful battery management functions and certainly does not ensure the battery's health.

What is its purpose and how does it work? Standard AC adapters and power delivery mechanisms into mobile devices have been limited to 5V and 1.6A. In other words, the AC adapter takes 120V/240V at the input and outputs a maximum of 8W of power at 5V. This has been the de facto standard for several years. This power output was quite sufficient when mobile devices had small batteries, about 1,500 mAh or less. But as tablets first came on the market with batteries in excess of 4,000 mAh, and smartphones followed with batteries in excess of 2,000 mAh, it became clear that higher power output was needed from these AC adapters.

Naturally, the easy answer would be to just increase the current above 1.6A while still maintaining an output voltage of 5V. But this is highly impractical. First, the USB cable cannot realistically be changed -- even though the new USB 3.0 standard aims at changing the cable dimensions, it will be years before such new standards make it through the industry. In other words, power delivery had to co-exist with present-day USB cables. Additionally, if you recall from your high-school physics class, electrical losses grow as the square of the current. So increasing the current from 1A to 2A, for example, quadruples the losses, all going into heat. That's not good! So if one cannot raise the current, then the next logical answer is to raise the voltage.
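To make the squared dependence concrete, here is a minimal sketch. The 0.25-ohm round-trip cable resistance is an assumed, typical figure for a USB cable, not a number from this post.

```python
# Resistive loss in a cable scales as the square of the current:
# P_loss = I^2 * R. The 0.25-ohm round-trip cable resistance is an
# assumed, typical value for a USB cable, not a figure from this post.
CABLE_RESISTANCE_OHMS = 0.25

def cable_loss_watts(current_amps, resistance_ohms=CABLE_RESISTANCE_OHMS):
    """Power dissipated as heat in the cable at a given current."""
    return current_amps ** 2 * resistance_ohms

# Doubling the current from 1 A to 2 A quadruples the heat in the cable:
print(cable_loss_watts(1.0), cable_loss_watts(2.0))
print(cable_loss_watts(2.0) / cable_loss_watts(1.0))  # -> 4.0
```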

I will digress briefly here. Take a look at how power is transmitted from the power stations to your house. You will find, somewhere near your house, a large substation whose primary responsibility is to step down the voltage from the transmission lines, typically at 66,000 volts, to what ultimately becomes 240V at your house. Transmission lines use 66 kV precisely to limit the losses and to use cables of reasonable dimensions and cost.

So QC 2.0 establishes a new methodology to deliver current at or near 1.6A, but raises the voltage from 5V to 9V or 12V. That effectively raises the maximum to nearly 20 Watts. Plenty for now. So you can see, there is nothing earth-shattering about the concept. In fact, Apple had been doing precisely this on their Macs for many years.

How does it work? The question specifically is: If the AC adapter can deliver 5, 9 or 12V at its output, how will the mobile device know which voltage to select? If you take a look at the terminus of a USB cable, you will see four electrical contacts. Two pins are for power and ground, and two pins are for data (called D+ and D-). The communication between the AC adapter and the mobile device happens over this pair of data wires. The combination of the D-/D+ voltages is the signal between the two devices on the choice of power voltage requested from the AC adapter (see picture below). If this voltage "wiggling" over the D+/D- is not present, then the mobile device defaults to the standard 5V. That's really it!
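The handshake can be sketched as a simple lookup. The specific D+/D- level pairs below are as commonly reported in application notes and teardowns, not taken from this post, so treat them as illustrative rather than authoritative.

```python
# Sketch of the QC 2.0 request mechanism: the device sets DC levels on
# the D+/D- data lines to ask the adapter for a given output voltage.
# The specific level pairs below are as commonly reported in application
# notes and teardowns -- illustrative, not authoritative.
QC2_REQUESTS = {
    (0.6, 0.0): 5,    # legacy / default request
    (3.3, 0.6): 9,
    (0.6, 0.6): 12,
    (3.3, 3.3): 20,   # higher-power "Class B" adapters only
}

def requested_output_volts(d_plus, d_minus):
    """Adapter output the device is asking for; defaults to 5 V when no
    recognized 'wiggle' is present on the data lines."""
    return QC2_REQUESTS.get((d_plus, d_minus), 5)

print(requested_output_volts(3.3, 0.6))  # -> 9
print(requested_output_volts(0.0, 0.0))  # no handshake -> 5
```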

So why is Qualcomm making such a big fuss of it? Of course, there is a marketing element, which is welcome. It's great to see Qualcomm throw its weight behind the ecosystem of fast charging, even when QC 2.0 solves only one half of the fast charging problem. By now, you know that Qnovo solves the other half of this problem, the battery.

So to sum it up, if your phone is capable of QC 2.0, then it is great. You are capable of fast charging. But before you start fast charging, make sure that your battery will not get damaged in the process.

Happy Thanksgiving!

© Qnovo, Inc. 2014 / @QNOVOcorp @nadimmaluf #QNOVOCorp

Saturday, November 22, 2014

37. CHEATING THE REPORTED CHARGE TIME                                    

I doubt that anyone will stand in the way of fast charging becoming standard. Once it is the norm, I also doubt anyone will even refer to it as fast charging. The comparison then becomes one between "normal charging" and "slow charging." This is a day we should all look forward to.

This merits a post on measuring and reporting charge times. Given the lack of standards, the temptation is substantial to "round down" charge times and project numbers that are better than actual. In the end, of course, consumers are savvy and will not be fooled by such gimmicks.

First, how can you measure your own charge time? It's quite simple. Discharge your smartphone or mobile device battery to near zero (say 1% of remaining charge). Dim your screen if you must keep it on, set your device to "airplane mode," and shut down the GPS (location finder) and all running apps; you want all the power to the phone to go into charging the battery. Mark the start time, connect your device to the charging AC adapter, then record the percentage read by the fuel gauge every 5 or 10 minutes. After your mobile device reports that the battery is fully charged, make a chart: on the vertical axis, show the measured percentage of charge in the battery, and on the horizontal axis, show the recorded time. You will get a graph that looks like the one below.

This particular chart includes data specific to a battery with a capacity of 2,500 mAh charging at a rate of 0.7C using a 9-Watt AC adapter. You will notice that the charge increases monotonically for the first hour or so, then near a charge level of 90%, it slows down quite a bit. For the readers who are technically inclined, this transition happens when the battery voltage reaches its maximum value of 4.35V, and the charging switches to a slow constant-voltage phase.

Let's plot this chart a little differently. The following chart shows, for the same battery above, the amount of time required to charge the battery in small increments. It takes about 8 minutes to charge the first 10%, and about the same for each subsequent 10% increment until the battery reaches the transition point. Then the charge rate really slows down, to a dismal 35 minutes for the last 10%. Worse yet, it takes nearly 15 minutes to charge from 98% to 100%. This is quite a bit given that the entire charge time is about 2 hours.

Charge time in minutes for increments of battery charge levels

This is precisely when the opportunity and temptation to cheat are greatest. Many smartphones actually declare they are full when they are near 98% of charge, thus saving 15 minutes from the actual charge time. Now, one may reasonably argue that the difference between 98% and 100% is minuscule, and that is indeed true. But beware of making comparisons among different mobile devices by looking at their 100% charge times. They are likely to be inconsistent. Instead, you are better off trusting the charge times at lower charge levels, say to 50% and to 80%. There is far less room to manipulate these figures.
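The measurement procedure described above lends itself to a quick script. This is a sketch with invented sample data; the interpolation shows how to extract the harder-to-fudge 50% and 80% charge times from your own recordings.

```python
# A sketch of the measurement above: given (minute, percent) samples
# recorded while charging, interpolate the time needed to reach a given
# charge level. The sample data below is invented for illustration.
samples = [(0, 1), (10, 13), (20, 26), (30, 39), (40, 52),
           (50, 64), (60, 77), (70, 88), (90, 95), (120, 100)]

def minutes_to_reach(percent, data):
    """Linearly interpolate when the battery hits `percent` charge."""
    for (t0, p0), (t1, p1) in zip(data, data[1:]):
        if p0 <= percent <= p1:
            return t0 + (t1 - t0) * (percent - p0) / (p1 - p0)
    raise ValueError("percent outside recorded range")

# Times to 50% and 80% are much harder to fudge than the time to 100%:
print(round(minutes_to_reach(50, samples), 1))
print(round(minutes_to_reach(80, samples), 1))
```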


Friday, November 21, 2014

36. DON'T ALWAYS TRUST YOUR ENGINEERS                                                  

I described earlier the two pieces of the fast charging puzzle. There is first the power delivery from the wall socket to your mobile device, and then there is managing the charging power into the battery so it does not get damaged.

The first piece of the puzzle is all about power circuitry, something electrical engineers enjoy and revel in. But the second piece of this puzzle is about managing the chemistry in the battery. Here, I have some bad news: electrical engineers, even the most successful ones, are likely to have scored a C- or a D+ in their college chemistry classes; in other words, they are not comfortable with chemistry. Like all humans, we drift towards our comfort zones: chemists drift to chemistry, and electrical engineers drift to building electronics. Getting the two disciplines to work together takes hard work and lots, really lots, of discipline. As a result, it is rare to find interdisciplinary development in the mobile world, especially between battery chemistry and electronics. We, at Qnovo, happen to be one of these rare exceptions.

Well, enough self-promotion; let's get back to real life. What happens if overzealous electrical engineers decide to charge a battery as fast as they can? After all, they are electrical engineers and know how to design circuits that can pump a lot of energy into the battery -- some even give it great names like Quick Charge. The result is simple: they will destroy the battery (if they use CCCV charging). The data in the next graph illustrates the extent of the damage.

The graph is a standard capacity fade curve for a battery. In this case, it is a 2,400 mAh lithium-ion polymer battery from one of the leading manufacturers. The vertical axis shows the remaining capacity of the battery (in Ah) as the battery undergoes cycling (shown on the horizontal axis). As the battery is repeatedly charged and discharged, it loses its ability to hold charge, hence the degrading curve. This graph shows the capacity degradation for four distinct charge rates, varying from a slow 0.5C up to a superfast 1.5C, all using CCCV. At 0.5C, the battery charges in about 3 hours, and at 1.5C, it charges in a little over an hour.

So what is the graph telling these overzealous engineers? It says that at the slow charge rate, the user will comfortably obtain over 600 cycles of operation. That's plenty to cover a year and more. But the superfast charge rate of 1.5C does so much damage to the battery that it will barely last a couple of months. In other words, fast charging with CCCV is bad! Isn't it time to use new methodologies that give us fast charging without damaging the battery?
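For readers who want to play with the idea, here is a hedged sketch. The fade rates below are invented for illustration and are not the measured data behind the graph; the sketch only shows how a constant per-cycle capacity loss translates into usable cycle life.

```python
# Illustrative only: the fade rates below are invented, not the measured
# data behind the graph. The point is how a constant per-cycle capacity
# loss under CCCV charging translates into usable cycle life.
def cycles_to_eol(fade_per_cycle, eol_fraction=0.80):
    """Cycles until capacity falls to `eol_fraction` of its initial
    value, assuming a constant fractional capacity loss per cycle."""
    return (1.0 - eol_fraction) / fade_per_cycle

# Assumed fade rates: faster CCCV charging degrades the cell faster.
fade_rates = {"0.5C": 0.00025, "1.0C": 0.0007, "1.5C": 0.0030}
for rate, fade in fade_rates.items():
    print(rate, round(cycles_to_eol(fade)), "cycles")
```

With these assumed numbers, the slow rate comfortably clears 600 cycles while the 1.5C rate lasts only a couple of months of daily charging, mirroring the shape of the argument above.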

The moral of this post: Don't trust everything your electrical engineers tell you...well, that is unless they really scored A's on their chemistry college courses!


Thursday, November 20, 2014

35. I WANT TO CHARGE WITH A SOLAR CELL - OR NOT!                        

I am sure many have wondered about the possibility of pasting a solar cell on the back of their mobile device to charge their battery. The idea is elegant but, it turns out, not terribly practical. We will go through the analysis together.

First, let's establish the practical size of a solar cell that could reasonably fit on our mobile device. For the sake of simplicity, I will assume that we can somehow glue and connect one such cell on the backside of our mobile device. The table below provides the approximate dimensions of common mobile smartphones.

Next, let's examine the power flux from the sun; in other words, how much solar energy one would reasonably expect to receive. Again, I will simplify. I will assume sunny days and the surface is perpendicular to the sun rays.

The Earth receives radiation from the Sun at approximately 1,340 Watts for each square meter of surface (W/m2). This is called the solar constant. For comparison, that is about as much radiating power as you get in your microwave oven. But that's in space, right at the edge of our atmosphere. By the time the radiation reaches the surface of our planet, it is down to an average of approximately 340 W/m2, or about a quarter of its value in space. The table above calculates the average amount of power that this hypothetical solar cell will deliver to the mobile device, again assuming a sunny day -- once again, this means outside, in direct sunlight, with your mobile device perpendicular to the sun's rays.

The math tells us that we should not expect more than 2 to 5 Watts of peak available charging power. In reality, once your mobile device is indoors or in the shade, or worse yet, in your pocket or purse, that amount of energy will drop by a factor of 100 or even more.

So, if we remain optimistic and are willing to sit in direct sunlight while the device charges, how long would it take to charge each of these mobile devices given this amount of solar charging power? The following table calculates these approximate times to full charge (100%). On average, it takes about 4 to 5 hours of charging. On the positive side, you will walk away with a nice tan!
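The arithmetic can be sketched as follows. The panel area, conversion efficiency, and battery size are assumed round numbers of my own, not values from the post's table.

```python
# Rough solar-charging arithmetic under the post's sunny-day assumption.
# Panel efficiency, area, and battery size are assumed round numbers for
# illustration, not values taken from the post's table.
AVG_IRRADIANCE_W_PER_M2 = 340  # average surface flux cited above

def charge_hours(panel_cm2, battery_wh, panel_efficiency=0.18):
    """Hours to deliver `battery_wh`, ignoring conversion losses beyond
    panel efficiency and the battery's own charge-acceptance limits."""
    watts = AVG_IRRADIANCE_W_PER_M2 * (panel_cm2 / 10_000) * panel_efficiency
    return battery_wh / watts

# A tablet-sized 300 cm^2 panel feeding an assumed ~20 Wh battery:
print(round(charge_hours(300, 20), 1), "hours")
```

Note that folding in a realistic panel efficiency makes the charge times even longer than the raw flux suggests, which only reinforces the post's conclusion.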

Of course, the question can be posed a little differently: How much surface area would one need in order to charge their device using a solar cell in a reasonable amount of time? Assuming that somehow this solar cell can be mounted outside in direct sunlight and a charging wire can be brought inside, and assuming again that you are willing to charge your device exclusively during the peak day hours near noon, then one requires a surface area about 200 to 300 cm2, or a square roughly about ½ to ¾ foot on each side.

I will leave you with some food for thought. The power levels available from a solar cell are about the same as the power levels that one gets from wireless charging pads. In other words, if you put your mobile device on a wireless charging pad, it will take 4 to 5 hours to charge your device. Of course, you are not tanning at the same time, but it is awfully slow! Now why is it that fast charging is not yet a standard?


Tuesday, November 18, 2014


We know and complain about the dismal charging times of modern smartphones! But why is it that these marvels take several hours to charge? That's what we will explore today.

There are two pieces to this puzzle. The first piece relates to power delivery, and the second piece relates to maintaining the health of the battery.

Let's start with power delivery. That's the circuitry, cables and necessary components to deliver sufficient electric power from the wall socket to the mobile device. Specifically, this includes three key elements: i) the AC adapter, ii) the USB cable, and iii) the circuitry, integrated circuits and other passive components such as inductors, all residing inside the mobile device itself.

Presently, the AC adapters are rated 5 Watts, taking 120V from the wall and delivering an output of 1A at 5V into the smartphone. Charging faster means more power. How much more? With 3,000 mAh batteries becoming common, we are looking at a minimum of 15 Watts. Therefore a 15-Watt AC adapter is required; just as importantly, it has to come at a cost nearly equal to that of the earlier, smaller AC adapter.

We also need a USB cable that can take the power from the AC adapter to the device. Standard USB cables are typically rated at or near 2A. This automatically says that the higher power should be delivered at a higher voltage. This is very similar to how power is transmitted from large power stations to your house. Power leaves on high-voltage transmission lines (these are the large towers that you see in open fields or on top of hills). When the power gets close to your house, the voltage is then stepped down to the standard 120V that you are familiar with -- you can see these little transformers perched on top of utility poles up or down your street. 

It is a similar concept in the mobile device. The new 15-Watt adapter operates at higher voltages, typically 9V or even 12V, up from the standard 5V. Take 120V from the wall, then step it down to either 5V (the old way) or 9V or 12V. Of course, the circuitry within the smartphone itself now has to be able to take the 9V or 12V. Voila! These AC adapters, USB cables and the internal circuitry capable of operating at the higher voltages are just now getting rolled out. With that, the first hurdle in fast charging is solved.
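A quick check shows why the voltage, not the current, has to go up, given the roughly 2A cable rating mentioned above:

```python
# Why the voltage, not the current, goes up: pushing 15 W through a USB
# cable rated at roughly 2 A only works at the higher voltages.
CABLE_LIMIT_AMPS = 2.0

def current_needed(power_watts, volts):
    """Current required to deliver a given power at a given voltage."""
    return power_watts / volts

for volts in (5, 9, 12):
    amps = current_needed(15, volts)
    verdict = "fits" if amps <= CABLE_LIMIT_AMPS else "exceeds cable rating"
    print(f"{volts} V -> {amps:.2f} A ({verdict})")
```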

The second piece of the puzzle is the health of the battery. How do we make sure that the battery will operate properly for the life of the mobile device when we dump 15 Watts of power into it? This has been the challenge of the industry -- and fortunately, one of several problems that we have solved here at Qnovo.

The problem with dumping 15 Watts of charging power into the battery is that it will destroy the battery's cycle life. This older post explains what cycle life is. Poor cycle life manifests itself as rapid loss of battery capacity and increasing warranty returns. That's when the consumer (you!) notices a rapid deterioration in battery life from one day to the next, and you start b*$%!ing.

Right now, the battery manufacturers are struggling to meet cycle life specifications, even as they try to deny it. They are promising new generations of batteries that can take the higher charging power, but in reality, they are failing to deliver the requisite performance. This highlights the industry's idiom about "liars, liars and battery suppliers." The failure to deliver stems from fundamental limitations in the underlying physics of charging the battery. Instead, the battery manufacturers are scaling back the battery capacity (i.e., reducing the number of mAh) in order to charge faster, but that's not what consumers want. We want high battery capacities AND faster charging, not a choice of one or the other.

The good news is that this problem is also being solved and we expect these solutions will also be rolling out in 2015. In other words, expect to start seeing fast charging becoming increasingly common some time next year. Yet I constantly wonder, why in the world did it take this industry so long!


Friday, November 14, 2014


You have heard it before from me: battery performance is getting seriously challenged. The battery vendors are having a difficult time (and I am being kind) meeting the expectations of the mobile device manufacturers. Specifically, it is getting extremely difficult for the battery vendors to simultaneously deliver high capacity, fast charging, and long cycle life. But why is that?

Let me digress briefly here and examine the average cruising speed of commercial airplanes over the past century. One rapidly notices that airplanes have not gotten much faster in the last 50 years. They are more fuel efficient; they travel longer distances in greater comfort (well, if you pay for it); and they are far safer than they have ever been...but they are surely not faster. That's because it gets very expensive to travel faster than the speed of sound (about 667 knots). The Concorde is a stark reminder of this economic limit. Chuck Yeager broke the physical limit of the sound barrier decades ago, but we can't really change the economics around this limit.

Lithium-ion batteries are rapidly approaching a similar limit for energy density. Short of a major breakthrough in a new material system, we are staring at a difficult barrier somewhere between 600 and 700 Wh/l. By that, I mean achieving large-scale manufacturing with affordable economics that match the requirements of the mobile device industry. A key limiting factor is now the carbon anode material. It is possible that new carbon-silicon composite anodes can change this equation, but for the foreseeable future, these new composites will continue to suffer from poor cycle life and high manufacturing costs. Until then, the economics of rising energy densities will remain severely disadvantaged.

Is there a scientific origin for this limit? There are plenty of reasons, but it is best to illustrate one: the impact of battery voltage on energy density. In a very simplistic description, higher energy density comes from packing more lithium ions inside the battery, as well as raising the voltage at the terminals (if you recall, energy is the product of charge, or ions, and voltage).
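As a quick worked example of that product of charge and voltage: the 3,000 mAh capacity, 3.8 V nominal voltage, and ~19 cm3 cell volume below are illustrative figures of my own, not numbers from this post.

```python
# Energy is the product of charge and voltage. A quick worked example
# with an illustrative 3,000 mAh cell at a 3.8 V nominal voltage, and an
# assumed ~19 cm^3 cell volume (both numbers are mine, not the post's).
def energy_wh(capacity_mah, nominal_volts):
    """Stored energy in watt-hours."""
    return capacity_mah / 1000 * nominal_volts

def energy_density_wh_per_l(capacity_mah, nominal_volts, volume_cm3):
    """Volumetric energy density in Wh per liter."""
    return energy_wh(capacity_mah, nominal_volts) / (volume_cm3 / 1000)

print(round(energy_wh(3000, 3.8), 1), "Wh")
print(round(energy_density_wh_per_l(3000, 3.8, 19)), "Wh/l")
```

With these assumed figures the density lands right around the 600 Wh/l barrier discussed above, which is why every extra bit of voltage or packing density matters so much.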

The voltage at the terminals is the difference between two voltages; that of the cathode voltage in reference to a fictitious lithium contact, minus the voltage of the anode, also in reference to that same fictitious lithium contact. This is illustrated in the graph below.

On the vertical axis, I show the voltages of both the cathode vs. lithium (top) and the anode vs. lithium (bottom). On the far left of the chart, i.e., when x is approaching zero, the graphite is void of lithium ions and cobalt-oxide is totally full of lithium ions. This is when the battery is "empty." On the far right, when x is approaching one, the opposite is true; the battery is "full." I specifically refrain from saying x = 0 or x = 1. At these extremes, bad things happen. When x = 0, the physical structure of the cobalt-oxide alloy is greatly damaged. This limits the low end of the battery voltage to about 3.0 V. In other words, never discharge your lithium-ion battery below 3.0 V; the risk of irreversible damage is great. Most smartphones actually cut off near 3.3 V (this is really when your phone reads zero percent).

At the opposite end when x = 1, lithium ions combine to deposit (or plate) as a metal.  The anode structure is also under immense mechanical stress. Additionally, when the cathode voltage rises past 4.2 V, the electrolyte begins to oxidize (and ultimately decompose). This effectively limits present-day lithium-ion batteries to a maximum voltage of 4.35 V with the understanding that the "bad stuff" begins to occur past 4.0 V, and becomes unsafe past 4.35 V.

To raise the energy density in the carbon/cobalt-oxide material system, one needs to raise the voltage and/or pack the electrodes as close as possible to each other. Well, raising the voltage past 4.35 V is getting very difficult. Finding electrolytes that can handle such voltages is no small feat. Additionally, the battery is now awfully close to the x = 1 point; in other words, the risk of lithium metal deposition at the anode is dangerously large at high energy densities. Life is getting tough, and there is very little room left for the battery vendors to maneuver.

These are just a few physical insights behind the challenges that the battery designers and manufacturers are facing. Finding solutions to these challenges via brute-force material development is not the answer. If you find yourself stuck with these limits, talk to us at Qnovo!


Thursday, November 13, 2014


If you own a mobile device and need to charge it, the first thing you do is find a (or your) AC adapter with a USB cable, then plug it into the appropriate USB charging port on your device, and voila! Come back some hours later and your battery should be full.

It is simple, as it should be. But have you ever wondered what happens behind the curtains? I will cover some of these details in this and additional future posts.

So first, before we delve into the electronic circuitry responsible for charging the battery, let us examine the electrical characteristics of the lithium-ion battery during charging. The battery is a complex chemical device, but electrically, it can be simplified into a two-terminal component; in other words, there are two electrical values of importance to us: i) the voltage across the battery terminals, and ii) the current flow, either into (i.e. during charge) or out (i.e. during discharge) of the battery.

The voltage across the terminals of the battery is directly correlated to the state-of-charge (SoC) of the battery -- if you recall from this earlier post, it is the fraction of the battery's charge relative to full.

During the charging phase, one would expect the voltage to rise across the terminals of the battery from the "empty" level (typically around 3.3 V) up to the "full" level (typically around 4.2 V or 4.35 V depending on the type of battery). This is precisely what the next chart illustrates for a lithium-ion battery with a nominal charge capacity of 720 mAh.

This chart of the battery's charging characteristics looks rather busy but we can dissect it quite readily to glean some valuable information. Every lithium-ion battery, without exception, will have a similar chart, often included in its data sheet. 

The right vertical axis shows the charge capacity (or SoC) as a function of charge time; it is shown as the long-dash curve. The charge starts at zero, and 100% is reached after about 2.5 hours of charging. The charging current itself is represented by the dotted line, with its values on the far-left vertical axis.

One can make a few key observations. First, the charging current has a steady value of approximately 720 mA, then begins to decay after less than one hour of charging. This first phase is called the constant-current (CC) phase; the second phase, where the current steadily decays, is called the constant-voltage (CV) phase. Some publications and blogs incorrectly label them the "fast charging" phase and the "trickle charging" phase...this is absolute nonsense and illustrates total ignorance on the part of the writer. I will revisit this type of CCCV charging later -- it is at the core of many ills that plague modern lithium-ion batteries.

The second observation we make is that the voltage of the battery indeed stays between 3.3 V and 4.2 V, but that somewhere around 50 minutes, the voltage is held steady at 4.2 V and remains there. This is precisely what the constant-voltage phase does; the internal charging circuitry will actually pin the charging voltage to a value of 4.2 V and keep it there until the charging is complete. 

This maximum voltage value comes straight from the chemistry. At higher values, the electrolyte inside the battery begins to oxidize and decompose, thus posing a serious safety hazard. This is one of several reasons why an end user cannot, and should not, mix chargers (AC adapters) used for NiMH batteries and lithium-ion batteries. The voltages for each battery type are vastly different.

Finally, one wonders: at what point is the charging process deemed complete? Naturally, you will say "100%," but how is 100% defined? During the charging process, the convention is to halt charging when the decaying current reaches 1/20th of the capacity of the cell. For this particular battery, that corresponds to the current decaying to 720/20 = 36 mA. From the chart above, this is reached after 2.5 hours. But mobile device manufacturers are in a hurry and often fudge their numbers; that's why you will see the green light turn on much, much earlier, shaving 30 or 45 minutes from the actual charging time.
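The CC phase, CV phase, and C/20 termination described above can be put together in a toy simulation of the 720 mAh cell. The ~83% transition point and the exponential current decay in the CV phase are simplifying assumptions of mine, not battery physics.

```python
# A toy CCCV simulation for the 720 mAh cell described above: constant
# current to an assumed ~83% state of charge, then constant voltage with
# an assumed exponential current decay, terminating at C/20 (36 mA).
def simulate_cccv(capacity_mah=720, cc_current_ma=720,
                  cv_start_fraction=0.83, cv_decay_per_min=0.97,
                  terminate_fraction=1 / 20, dt_min=1.0):
    """Return total charge time in minutes for this simplified model."""
    charge_mah, t_min, current_ma = 0.0, 0.0, float(cc_current_ma)
    cv_phase = False
    while current_ma > capacity_mah * terminate_fraction:
        charge_mah += current_ma * dt_min / 60.0
        t_min += dt_min
        if not cv_phase and charge_mah >= cv_start_fraction * capacity_mah:
            cv_phase = True  # voltage now pinned at its maximum
        if cv_phase:
            current_ma *= cv_decay_per_min  # assumed CV-phase decay
    return t_min

print(simulate_cccv(), "minutes")  # roughly 2.5 hours, as in the chart
```

Even in this crude model, the long tail of the CV phase accounts for a large share of the total charge time, which is exactly where the "fudging" opportunity lies.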


Tuesday, November 11, 2014

31. A PEEK INSIDE THE BATTERY OF A TESLA MODEL S                           

A single lithium-ion 18650 cell is relatively small in size and in capacity. So how does Tesla pack 85,000 W.h into the battery pack of the Tesla Model S? The answer is: very carefully.

The battery pack in a Tesla S is a very sophisticated assembly of several thousand small individual 18650 cells connected electrically in a series* and parallel† combination. A colleague alerted me recently to an outstanding teardown that an owner of a Model S is performing on his battery pack. This offers a great peek into how Tesla designed the pack.

First, it is very important to note that 85 kWh is a huge amount of energy. The voltages at the terminals of the pack are high, and the currents can also be dangerously high. In other words, safety is of paramount importance in the design, assembly, and equally the teardown of a large battery pack of this size.

The photographs that this owner has published are very telling and provide a great insight into the design of this pack. The first photograph shows the entire pack with its top cover removed.

Photograph of the battery pack for a Tesla Model S electric vehicle. Courtesy: Tesla Motors club user [wk057]. 

This photograph shows a total of 16 sections, or modules, electrically connected together. A closer inspection of one individual module shows that it contains a number of 18650 cells, all sitting next to each other in a vertical position. One can diligently count a total of 432 individual cells in one single module. 

Therefore, the first observation we can make is that there are a total of 16 x 432 cells = 6,912 cells. The capacity of each cell is 85,000 / 6,912 = 12.29 W.h., or equivalently, 3.4 A.h. The individual cells are of the 18650 type, manufactured by Panasonic. They use an anode made of graphite, and the cathode is made of NCA (nickel-cobalt-aluminum alloy). The NCA-graphite architecture has a lower nominal voltage than the cobalt-oxide alloy commonly used in mobile devices. The nominal voltage of a NCA-based cell is 3.6V.  The owner measured the module voltage to be 19.63V when the battery was virtually dead. A dead (i.e., empty) cell has a voltage near 3.2V. 

Photograph of one individual module from the pack.
Therefore, our second observation is that each module contains 19.63V / 3.2V ≈ 6 cells in series. Consequently, the module is configured as 72 parallel legs, each containing 6 cells in series (abbreviated as 6s x 72p).

The energy of one single module is 85,000 / 16 = 5,312 W.h. This is equivalent to the energy contained in about 100 (yes, one hundred) laptop PCs. A closer examination of the module assembly shows that each cell is wired to the main bus (the primary electrical path) through little fuses...this is an outstanding safety feature that will disconnect an individual cell that may have shorted with time.

Photograph showing the fuse wires connecting individual 18650 cells to the main bus.
Our third observation is that the entire pack consists of 16 modules connected in series; therefore the overall architecture is 96s x 72p. The pack voltage is nominally 96 x 3.6 = 345 V, but can be as low as about 310 V when the pack is nearly empty, and as high as 403 V when the battery is 100% full (though Tesla does not recommend charging the battery to 100%).
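The observations so far reduce to a few lines of arithmetic:

```python
# The pack arithmetic derived from the teardown photographs above.
CELLS_PER_MODULE = 432
MODULES = 16
SERIES_PER_MODULE = 6
PACK_ENERGY_WH = 85_000
NOMINAL_CELL_V = 3.6

total_cells = CELLS_PER_MODULE * MODULES               # observation 1
cell_energy_wh = PACK_ENERGY_WH / total_cells
cell_capacity_ah = cell_energy_wh / NOMINAL_CELL_V
parallel_legs = CELLS_PER_MODULE // SERIES_PER_MODULE  # observation 2
series_cells = SERIES_PER_MODULE * MODULES             # observation 3

print(total_cells, "cells,", round(cell_energy_wh, 2), "Wh each,",
      round(cell_capacity_ah, 2), "Ah each")
print(f"{series_cells}s x {parallel_legs}p,",
      f"{series_cells * NOMINAL_CELL_V:.1f} V nominal")
```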

Our fourth observation is about weight. Panasonic specifies a weight of 46 g for each 18650 cell. The weight of all 6,912 cells comes out to 318 kg, or about 700 lbs. The weight of the entire battery pack is estimated by various sources to be 1,323 lb. So the 18650 cells account for approximately 53% of the weight of the pack -- the rest is due to electronics, cooling systems, wiring and safety features.

Judging from my earlier post on cost trends, the estimated cost is about $1.50 for each 18650 cell. I am assuming that the Tesla sourcing team is very influential in demanding attractive pricing from the cell manufacturer, Panasonic. This equates to a cost of approximately $10,000 for the cells used in the pack. Given a delivery rate of about 35,000 cars for 2014, that equates to nearly $350 million that Panasonic will collect this year from selling cells to Tesla...and the number may grow to $1B in 2015.

Adding another estimated $5,000 for the electronic battery management system brings the preliminary material (BOM) cost to $15,000 for the pack used in the Tesla Model S. That equates to less than $200 for each kWh of stored energy. It also works out to about $50 of battery cost for each mile of driving range. It's amazing what one can derive from a handful of photographs!
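The cost math works out as follows, with the post's estimates treated as rough assumptions; the 265-mile EPA range for the 85 kWh Model S is a figure I have added, not one from the post.

```python
# The cost arithmetic, with the post's estimates treated as rough
# assumptions. The 265-mile EPA range for the 85 kWh Model S is my
# added figure, not taken from the post.
CELL_COST_USD = 1.50
TOTAL_CELLS = 6_912
BMS_COST_USD = 5_000
PACK_ENERGY_KWH = 85
RANGE_MILES = 265

cells_cost = CELL_COST_USD * TOTAL_CELLS   # roughly $10,000, as above
pack_bom = cells_cost + BMS_COST_USD       # roughly $15,000 pack BOM
print(round(pack_bom / PACK_ENERGY_KWH), "$/kWh")  # under $200 per kWh
print(round(pack_bom / RANGE_MILES), "$/mile")     # $50-60 per mile
```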

* A series configuration means the positive terminal of the first cell is connected to the negative terminal of the second cell. The voltage at the free terminals is now the sum of the voltages at each cell. The series combination allows raising the voltage of the battery pack to much higher voltages.

† A parallel configuration means electrically connecting the positive terminal of the first cell to the positive terminal of the other, and the negative terminal of the first gets wired to the negative terminal of the other cell. The voltage at the terminal of one cell is identical to the voltage at the terminal of its sister cell. A parallel configuration allows the addition of capacity without raising the voltage.

© Qnovo, Inc. 2014 / @QNOVOcorp @nadimmaluf #QNOVOCorp

Saturday, November 8, 2014

30. COST, COST AND COST.                                                            

The cost of today's lithium ion battery constitutes a very large fraction of the total cost of electric vehicles, and a very visible portion of the total BOM (bill of materials) cost for a mobile device. It is estimated that the battery pack for a Chevy Volt costs nearly $10,000. That's nearly 25% of the total price of the vehicle. It's no wonder that the CEO of Fiat, Sergio Marchionne, requested earlier this year that customers stop buying his electric version of the Fiat 500 because he is tired of losing $14,000 on each vehicle. 

The small battery in a high-end smartphone from one of the top device manufacturers can cost upwards of $3. Batteries made in China cost less but often come with serious performance drawbacks. Given that a high-end smartphone has a BOM near $200, the battery is a visible line item on the list of components.

Naturally, this attracts attention, scrutiny and further examination of where the cost curve has been, and where it is headed. The next chart shows the pricing trends the industry has been subjected to. As the deployment of batteries accelerated, driven by demand in both the mobile and EV sectors, the cost of batteries, at least for consumer devices, measured in dollars per unit of energy (watt-hours), plummeted nearly 10X in the past 20 years. The forecast is for further decline, especially as more production capacity comes on line in China. The cost curve for electric vehicles is more recent, reflecting the modest deployment of capacity in new electric vehicles such as the Tesla Model S, Chevy Volt, Ford Focus Electric and the Nissan Leaf.

Cost curve for lithium-ion batteries. Source: Bloomberg New Energy Finance, Qnovo

Let's further examine the curve corresponding to consumer batteries, driven primarily by mobile devices and laptop PCs. Present costs for consumer batteries hover near $0.20-$0.30 per Wh. Higher-performance batteries can command substantially more. The annual production and consumption capacity for this market segment is approximately 20,000 MWh. Judging from the historic trend, it may be another 3 or 4 years before the 100,000 MWh threshold is reached -- in other words, another 3 or 4 years before we see these cost figures break the $0.10 per Wh mark. That is, of course, assuming the trend continues at its present pace. Any excess production capacity in Asia, and particularly in China, could easily accelerate the price decline.
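One way to formalize this extrapolation is an experience curve, where price falls as a power law of cumulative volume. A minimal sketch, assuming a present price of $0.25/Wh (my midpoint of the range quoted above):

```python
import math

# Fit price ~ volume^-b to two points from the post:
# ~$0.25/Wh at ~20,000 MWh today, $0.10/Wh at the 100,000 MWh mark.
p_now, v_now = 0.25, 20_000     # $/Wh, MWh (assumed midpoint)
p_then, v_then = 0.10, 100_000

b = math.log(p_now / p_then) / math.log(v_then / v_now)
print(f"price ~ volume^-{b:.2f}")
# Equivalent learning rate: price cut per doubling of cumulative volume.
print(f"~{(1 - 2 ** -b) * 100:.0f}% price decline per doubling")
```

The implied exponent (about 0.57, or roughly a one-third price cut per doubling of cumulative volume) is only as good as the two anchor points, but it makes the "3 or 4 years" reasoning explicit.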

Paralleling this pricing pressure is the continued trend of improving the energy density of batteries. Over the past 20 years, energy density has increased by approximately 2X. For the most part, the progress was incremental, translating to an annual gain of about 5%. A state-of-the-art battery in a high-end mobile device may have a density above 600 Wh/l. But as energy densities continue to climb, both development and manufacturing costs are accelerating. The present material system, using graphite-based anodes and cobalt-oxide cathodes, is reaching its limit, and research into new materials is needed. These enormous research and development costs, coupled with the large capital costs of manufacturing, are reducing the field of battery vendors to a select few, notably the large chemical conglomerates based in Asia.

So will the past trend of increasing energy density and decreasing costs continue indefinitely? The challenges of the past couple of years suggest that a change in this trend is quite likely. If the rising costs of developing and manufacturing new high-energy-density batteries are not tamed, then expect a slowdown in the rise of energy density, along with commoditization and a rapid decline in pricing for lithium-ion batteries. That, folks, is the definition of an inflection point!


Wednesday, November 5, 2014

29. THE DIFFERENT SHAPES OF A BATTERY...                                      

That is, of a rechargeable lithium-ion battery, of course... We all know that lead-acid batteries, the type you have under your hood, tend to come in standard sizes, but lithium-ion batteries come in a multitude of packages and shapes.

One of the most common misconceptions is that polymer batteries are something altogether different. In fact, they are one of the common types of lithium-ion batteries, assembled and packaged in a flat, pouch-like shape. Their core design is based on standard lithium-ion chemistry. They are called "polymer" batteries because they tend to use an electrolyte that is gel-like rather than liquid-like. The outer package is a thin foil that holds the internal structure together. Consequently, they can be prone to damage or puncture, and are often if not always embedded within the mobile device for mechanical protection.

One of the advantages of polymer batteries is that they can be manufactured in nearly arbitrary custom dimensions or shapes. This ability to make the battery fit the mobile device (instead of the other way around) gave polymer batteries their great appeal. Polymer batteries can also be made very thin. The photograph shows a polymer cell made by Sony for use in their Xperia Z2 smartphone. It is only about 4 mm thick. The downside of polymer batteries is their lack of standardization and, consequently, their higher cost: each battery model has to be designed and shaped to the particular dimensions required by the manufacturer of the mobile device. A polymer battery can be nearly twice as expensive (for the same amount of stored energy) as its older sibling, the standard 18650 battery cell.

Three different types of rechargeable lithium-ion batteries. From left to right: Prismatic (used in a Samsung Galaxy S5), Polymer (used in a Sony Xperia Z2), and an 18650.

The 18650 cell was named with very little creativity: it is a standard cylinder, 18 mm in diameter and 65 mm in height, hence the name. The standard size of these cells made them ubiquitous and inexpensive in the past decade. They were widely used in laptop computers but proved less practical for smartphones with thin profiles. Tesla Motors took advantage of the large-scale manufacturing and low cost of 18650s and adopted them for use in its electric vehicles. The battery pack in a Tesla Model S contains nearly 7,000 such cells. The photograph above shows an 18650 cell with a capacity of 3,400 mAh made by Panasonic; it is similar to the one used in a Tesla vehicle. The other major manufacturers of electric vehicles have elected to use large-format polymer-type batteries. Nonetheless, 18650s are here to stay. There is so much manufacturing oversupply of 18650s that their price continues to plummet, making them an attractive commodity.

The third type of cell is called prismatic. Prismatic cells are, at their core, very similar to polymer cells but are packaged inside a solid case or can, typically made of an aluminum alloy. This offers added mechanical protection and the requisite safety. Mobile devices that offer replaceable batteries use prismatic cells. The photograph above shows a prismatic cell used in the Samsung Galaxy S5. Owing to the walls of the external can, prismatic cells tend to be thicker than polymer batteries.

Back to the photograph above: the keen reader might ask about the connector attached to the Sony polymer battery. It is indeed an electrical connector made using a thin flexible cable. At the tip of this cable, one can observe some circuitry that provides the necessary electronic protection for the battery. In particular, this circuitry ensures that the battery does not experience excessive voltages or excessive currents. A built-in fuse disconnects the battery should it be exposed to adverse conditions. Similar circuitry is also embedded inside the case of a prismatic cell. The 18650 cell, however, is bare, i.e., it does not include any such protection circuitry; that protection must be provided by an external battery management system before the cell is put to use.


Tuesday, November 4, 2014

28. HOW DOES A FUEL GAUGE WORK ?                                         

There is a little icon in a top corner of your mobile device that reads a percentage value corresponding to the fraction of your battery that is full -- this fraction is known as the state of charge (SoC). The little measurement instrument embedded in the mobile device is called a battery fuel gauge. Have you ever wondered how it works?

Modern electronic fuel gauges were first commercialized by Benchmarq in the 1990s, initially for laptop PCs. The company was subsequently acquired by Texas Instruments. You can recognize that line of products by their bq prefix. Several other semiconductor companies offer similar products, for example, Maxim Integrated, Seiko Instruments, and more recently, Qualcomm, which integrates its fuel gauge directly into its power management integrated circuit (PMIC) for mobile devices.

The basic principle of measuring state of charge is rather old. It has long been known that the chemical potential is a direct function of the state of charge. The chemical potential in a rechargeable battery is the voltage measured at the terminals of the battery -- now here's an important qualifier -- at equilibrium. In simple words, one has to let the battery sit for a long duration of time to reach equilibrium, then make the voltage measurement. This voltage is then a direct measure of the SoC. For a particular chemistry or type of battery, say a lithium-ion rechargeable battery, this relationship is universal. In other words, it applies to every battery that is made of the same chemistry. That's quite handy because we don't need to reinvent a new fuel gauge for every mobile device. The graph below shows this voltage-vs-SoC relationship for a rechargeable lithium-ion battery using a carbon anode and a cobalt-oxide cathode. When the battery is fully charged (shown as 100% on the horizontal axis), its voltage is at its maximum of 4.35 volts. As charge is removed from the battery, its voltage drops according to the relationship identified by the red curve. This relationship is known in technical terms as the open-circuit voltage (OCV) function. Notice that capacity (in units of mAh) does not enter this relationship.
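In practice, a fuel gauge stores this OCV function as a calibration table and interpolates between entries. A minimal sketch follows; the table values are invented for illustration (a real table is characterized from measurements on the actual chemistry):

```python
# Illustrative OCV table for a 4.35 V max lithium-ion cell;
# the voltage values below are invented, not measured data.
OCV_TABLE = [  # (state of charge %, open-circuit voltage V)
    (0, 3.00), (10, 3.55), (25, 3.65), (50, 3.75),
    (75, 3.95), (90, 4.15), (100, 4.35),
]

def soc_from_ocv(volts):
    """Linearly interpolate SoC from an equilibrium voltage reading."""
    if volts <= OCV_TABLE[0][1]:
        return 0.0
    if volts >= OCV_TABLE[-1][1]:
        return 100.0
    for (s0, v0), (s1, v1) in zip(OCV_TABLE, OCV_TABLE[1:]):
        if v0 <= volts <= v1:
            return s0 + (s1 - s0) * (volts - v0) / (v1 - v0)

print(soc_from_ocv(3.75))   # with this table, 3.75 V maps to 50% SoC
```

Note that, exactly as the post says, cell capacity in mAh appears nowhere in this lookup.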

Naturally, the next question is: how do we measure the SoC if the battery is not in equilibrium? A chemical system, of which a battery is a prime example, is not in equilibrium when there is current flowing through it -- for example, when it is powering your mobile device. In such a common scenario, the actual voltage at the terminals of the battery is a little lower than what you would measure in equilibrium. That's because every battery has a little internal resistance. So when there is current flow, the voltage is lower by a value equal to the product of the current and the resistance (if you recall Ohm's law from your high school science class). This is illustrated in the chart above with the blue dashed curve. In principle, one can correct for this offset: measure the value of the internal resistance, multiply it by the measured current, then add the result to the measured terminal voltage to obtain an estimate of the equilibrium voltage. This is precisely how most middle-of-the-range fuel gauges operate.
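The correction described above amounts to one line of Ohm's law. A sketch, with illustrative numbers (the 0.1 Ω internal resistance is an assumption, not a datasheet value):

```python
def equilibrium_voltage(v_terminal, i_discharge_a, r_internal_ohm):
    """Estimate the rest voltage while a discharge current flows:
    V_eq ~ V_terminal + I * R (Ohm's law correction)."""
    return v_terminal + i_discharge_a * r_internal_ohm

# A cell with 0.1 ohm internal resistance, draining 1 A, reads
# 3.65 V at its terminals; its estimated rest voltage is 0.1 V higher.
print(f"{equilibrium_voltage(3.65, 1.0, 0.1):.2f} V")
```

The corrected value would then be fed into the OCV-to-SoC lookup as if it had been measured at rest.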

Alternatively, one can wait for durations of time when the mobile device is in sleep mode (say in the middle of the night), then assume that it is close enough to the equilibrium voltage. It may take tens of minutes for a battery to reach this state of equilibrium; so waiting for long durations to make a SoC measurement is not terribly practical.

But the method of correcting for the IR offset creates inherent errors. That's because the internal resistance of the battery fluctuates with current, temperature, and age, and is difficult to characterize or correct for in real life. This error can reach ten percentage points or even more. Such an error may be inconsequential when the battery is full, but it is quite detrimental when the mobile device is near empty: an end user would be shocked to be told there was 10% of the battery charge left when the battery was actually at zero! If you have experienced such a case, then you know your fuel gauge is not terribly accurate.

Higher-grade fuel gauges supplement the OCV measurement technique with another instrument called a coulomb counter. This is a fancy name for an electronic function in the fuel gauge chip that measures current with great precision, then multiplies it by a precisely measured time interval. The product of the two is electric charge, measured in its unit, the coulomb. In other words, the coulomb counter is counting charge flowing through it (or counting electrons, if it were really, really precise). This function becomes very useful when the battery is actually powering a mobile device, i.e., when the battery is not in equilibrium. If one starts by measuring the SoC of the battery when it is in equilibrium, then any charge lost or added when the battery is not in equilibrium is measured using the coulomb counter. The combination of these two methods allows a more precise measurement of the SoC, with accuracy often reaching 1%.
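The hybrid scheme can be sketched in a few lines: anchor the SoC with an OCV reading taken at rest, then integrate current over time while the device is loaded. The class name and numbers below are illustrative, not from any vendor's part:

```python
class FuelGauge:
    """Toy hybrid gauge: OCV anchor plus coulomb counting."""

    def __init__(self, capacity_mah, soc_at_rest_pct):
        self.capacity_mah = capacity_mah
        self.soc = soc_at_rest_pct   # anchored via an OCV reading at rest

    def count(self, current_ma, seconds):
        """Integrate I*dt into SoC; negative current = discharge."""
        delta_mah = current_ma * seconds / 3600.0
        self.soc += 100.0 * delta_mah / self.capacity_mah
        self.soc = max(0.0, min(100.0, self.soc))   # clamp to 0..100%

gauge = FuelGauge(capacity_mah=3000, soc_at_rest_pct=80.0)
gauge.count(current_ma=-300, seconds=3600)   # one hour at 300 mA drain
print(gauge.soc)   # the 300 mAh drained is 10% of 3,000 mAh -> 70.0
```

A real gauge re-anchors against the OCV table whenever the device rests long enough, which bounds the drift that pure coulomb counting would otherwise accumulate.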

Now let's get to an exciting and practical part of the fuel gauge: If you have ever walked into an AT&T or Verizon or Apple store complaining about your battery, and you were told to drain the battery to zero and then charge it back to 100% (search the web for this, and you will find lots of such stories), it is because of your fuel gauge. Let me repeat this clearly: It has nothing to do with the lithium-ion battery or any hints of memory in the battery. Lithium-ion batteries have no memory effects. All you are doing in this charge-discharge exercise is simply letting the fuel gauge recalibrate itself. You see, if you use the battery continuously in the middle range, say never reach zero and seldom reach 100%, the fuel gauge will lose track of what is actually zero and what is actually 100%. In such cases, the readings become flawed, and the fuel gauge gets confused. So a full charge-discharge cycle helps reorient the fuel gauge.


Monday, November 3, 2014

27. STANDARD GAMES BATTERY VENDORS PLAY.                             

By now you are familiar with the limitations of a rechargeable lithium-ion battery. There are three parameters that I have covered so far that describe the general performance of a lithium-ion battery. They are energy density (which dictates capacity), charge rate (which dictates charge times), and cycle life (which dictates warranty).

Without the use of more sophisticated battery management algorithms, one can achieve excellent performance on two of these parameter axes, but not all three. So battery vendors and device manufacturers often resort to compromises that are rapidly becoming quite limiting. Let's review some of these design compromises:

1) Sacrifice cycle life:

This has been the most common of these compromises, primarily because carriers and operators did not historically specify or enforce an actual figure for cycle life. It was commonly understood that 500 cycles was sufficient, but Verizon Wireless moved the goal post to 800 cycles, which made life far more challenging for battery vendors.

So for this newest crop of smartphones with over 3,000 mAh in capacity (and often a thin profile that drives the energy density to or above 600 Wh/l), device manufacturers are trying to get by with 500 cycles. But what if you need to ship to Verizon and meet their 800-cycle specification -- what other compromise do you implement? As a consumer, look for the fine print on your product warranty. If it says that the battery warranty is limited to one (1) year, then you have every reason to suspect that your battery will not get much past 500 cycles.

2) Sacrifice charge time:

That's right. If you can't make 800 cycles, then drop the charge rate, or increase the charge time. This can be painful and is an old trick in the book to increase cycle life. But slower charge times irritate end users and this trick is beginning to lose steam. How slow will the charge time be? How about pushing 3 hours or more?

3) Sacrifice energy density:

Precisely! Drop the energy density, lose capacity, and that buys the device maker and battery vendor some extra room. How low should one drop it? To increase the charge rate from 0.3-0.4C to about 0.7C, the energy density drops to about 550 Wh/l, and to increase the charge rate to 1C or above, the energy density has to drop to 500 Wh/l or even lower... that means you can only fit less than 2,500 mAh in the same volume where you were originally able to fit 3,000 mAh. That's not progress! For a discussion of C-rates, read this previous post.
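The numbers in this trade-off are easy to tabulate: for a fixed volume, capacity scales linearly with energy density. A quick sketch using the figures quoted above (the 3,000 mAh at 600 Wh/l starting point is taken from the post):

```python
# Capacity in a fixed volume scales with volumetric energy density.
BASE_MAH, BASE_WHL = 3000, 600   # starting point quoted in the post

for charge_rate, wh_per_l in [("0.3-0.4C", 600), ("0.7C", 550), ("1C", 500)]:
    mah = BASE_MAH * wh_per_l // BASE_WHL
    print(f"{charge_rate:>8}: {wh_per_l} Wh/l -> {mah} mAh")
```

The 1C row lands at 2,500 mAh, matching the "less than 2,500 mAh in the same volume" penalty described above.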

4) Sacrifice depth of charge:

Depth of what? Yes, this is the first time I introduce this term. It is not typically used for mobile devices, but it is very prevalent in automotive electric vehicles. When one quotes a certain capacity, say 3,000 mAh, and a corresponding energy density, it is implicitly assumed that the battery will be fully charged, i.e., the voltage of a single battery cell will rise to its highest safe limit, most commonly 4.35 V (though in some cases it can be only 4.2 V). This is the true definition of 100% full. But what if we fill the battery to only, say, 80%, which corresponds to a battery voltage about 0.1 to 0.15 V below the maximum allowed voltage? In other words, the usable energy density is actually only 80% of the nominal density. This is called the depth of discharge. To use an analogy, just because your bucket can hold 10 gallons of water does not mean you actually have 10 gallons of water. If your bucket is only 80% full (your depth of discharge), then you only have 8 gallons to use. I will talk more in future posts about the impact of depth of discharge on cycle life and battery performance.
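The bucket analogy reduces to a single multiplication; a minimal sketch:

```python
# Charging a nominal 3,000 mAh cell to only 80% of full leaves
# 80% of the nominal capacity available for use.
nominal_mah = 3000
depth = 0.80                      # fraction of full actually used
usable_mah = nominal_mah * depth
print(f"usable: {usable_mah:.0f} mAh of {nominal_mah} mAh nominal")
```

The same scaling applies to the 10-gallon bucket: at 80% depth, only 8 gallons are available.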
