Visions of Commercial Transportation: The Future of Commercial Auto Insurance Is Here

By Russ Banham

Carrier Management

For several years now, the commercial auto segment has been among the property/casualty insurance industry’s worst-performing lines. Wounded by a sharp uptick in claims frequency and severity and helped little by inadequate premium increases, the segment’s average direct loss ratio and combined ratio hover around 66 and 110, respectively.

The market’s distress is attributed to a variety of higher loss exposures, including increasingly distracted and fatigued truck drivers and the high cost of repairing vehicles embedded with expensive collision avoidance cameras, sensors and telematics devices. The good news is that these same technologies are expected to positively alter the status quo, positioning commercial auto for a market rebound within the next five to 10 years.

Transportation experts project that semi-autonomous and fully autonomous electric trucks using crash avoidance sensors, cameras, telematics and deep learning forms of artificial intelligence will guide the way toward unparalleled safety. While semi-autonomous trucks equipped with lane correction and automatic braking systems are already on the road, fully autonomous/driverless vehicles are being tested across the country and the world.

Over the next decade, these trucks will gradually make their way into the nation’s transportation network, initially for long-haul delivery of goods on highways. Tests currently are being conducted in so-called “platooning,” which involves a convoy of trucks lined up in a row on a dedicated highway lane set aside specifically for their use. Only the first truck in the queue is driven by a human being; the remainder use automated driver support systems to maintain a specific distance behind the leader, accelerating and braking as dictated.

Since highway accidents result in the most severe commercial auto insurance claims (due to a greater risk of multiple fatalities), a future in which autonomous trucks travel along a dedicated lane bodes well for commercial fleet operators, their insurers and the driving public at large. In 2017, truck-related fatalities reached their highest level in 29 years, up 9 percent to 4,761 deaths. (Source: “Trucking Fatalities Reach Highest Level in 29 Years,” Oct. 4, 2018, reporting on the U.S. Department of Transportation’s National Highway Traffic Safety Administration analysis, “2017 Fatal Motor Vehicle Crashes: Overview,” page 3)

Accident frequency rates are a different matter. In our Amazon-fueled era of ecommerce, more trucks and delivery vans are congesting shorter-distance routes, particularly in the so-called “last mile” of transport.

“Accidents are increasing because of crowded roadways and streets, driver inexperience, and the rush to deliver goods in a certain time frame,” said Steve Viscelli, senior fellow at the University of Pennsylvania’s Kleinman Center for Energy Policy. “What used to be an experienced UPS driver delivering a package here and there is now 20 or 30 part-time people in vans delivering dozens of small packages in very short time frames.” He’s referring to Amazon Flex and Uber Freight Plus—applications that link drivers with shippers to deliver smaller packaged goods.

These drivers are subject to both traditional collisions and non-accident-related injuries. “There’s been an increase in basic ‘slips and falls’ from drivers bringing bulky packages into a residence,” Viscelli said. “Drivers also double park and fail to use proper techniques for removing heavy packages, hurting their backs. Of course, all that congestion and the need to make so many deliveries in a certain time frame also results in a greater number of collisions.”

Despite higher claims activity, technology is seen as an eventual enabler for safer practices, reducing both accident frequency and severity. Telematics drawing data from onboard cameras and other sensors will alert fleet safety managers in real time to road hazards, weather conditions, vehicle malfunctions, traffic congestion and optimal alternate routes.

As insurers and commercial fleets—large semi tractor-trailers, intermediate-sized trucks, and smaller pickups and vans—begin to share their respective data, expectations are for truly significant decreases in accident-related claims. “In five to 10 years, we will see a period of fast-maturing safety,” said Randy Mancini, vice president of commercial lines at regional insurer Penn National. “Beyond that we will see a transportation system that looks little like the one we’ve long known.”

Handing Off the Baton

This future is already beginning to materialize, evident in subtle changes in the intermodal delivery of raw materials and packaged goods. “Historically, the system has relied on rail, semi tractor-trailers, and smaller trucks and vans,” said Viscelli, author of the book, “The Big Rig: Trucking and the Decline of the American Dream.” “In the expanding ‘I want it now’ ecommerce environment of the future, rail will take on a larger role transporting goods from shipping ports to large warehouse-like depots alongside major highways.”

Fully autonomous trucks will then pick up these goods for transport over long distances in a dedicated highway lane to another depot, Viscelli added. “From there, a semi-autonomous, intermediate-sized truck will transport the stuff on shorter distances to an Amazon or other fulfillment center,” he said. “Smaller trucks and vans driven by for-hire drivers will then handle the last mile of delivery to homes and businesses, as they do now.”

This scenario, assuming it comes to fruition, will result in fewer accidents on highways. “If lawmakers and regulators set aside a designated lane exclusively for self-driving trucks, the chance of a passenger car cutting off a truck is vastly reduced—to the benefit of commercial fleets, drivers and insurers,” said Frank Palmer, senior expert in McKinsey & Company’s insurance practice.

Frank Netcoh, head of middle-market auto for The Hartford, shares this perspective. “If you have a fully autonomous truck on a big stretch of highway designated specifically for these vehicles, the platooning concept would reduce the possibility of making a mistake,” Netcoh said. “It’s hard to get into a collision when no ‘driver’ is screwing up.”

What about the safety of shorter-duration transport vehicles driven predominantly by people?

The experts are optimistic that safety enhancements warning drivers about impending hazards and correcting vehicles to reduce the risk of rear-end collisions, unsafe lane departures and rollovers (caused by the unequal distribution of weight inside a truck) will also increase safety and reduce accident frequency.

“These crash avoidance technologies are already in use today, but the real ‘game changer’ in my mind is telematics—the movement of information from onboard cameras and sensors to fleet safety managers,” said Brian Fielkow, CEO of JETCO Delivery, a logistics business that specializes in regional trucking, heavy haul and national freight.

Depending on the telematics device, a two-way camera films both driver behaviors and the road ahead. If a truck is in an accident, the data can indicate what the driver was doing just prior to the collision—such as reading a text, eating or drinking, or blankly looking ahead with shoulders slumping and eyelids shuttering from fatigue (the camera captures a driver’s body position, in addition to his or her face). This information can then be used for evidence-based training purposes. “It’s like watching game films—a learning tool,” said Fielkow. “Even the best driver isn’t a perfect driver.”

Front-looking cameras, on the other hand, can discern the different factors leading up to an accident or a near-miss. “A trigger event like a hard brake will inform the telematics to send video depicting the road scene a few seconds prior,” Fielkow explained.
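
The trigger-and-lookback behavior Fielkow describes maps onto a standard pre-event ring buffer: the camera continuously overwrites old footage, and a trigger such as a hard brake snapshots the seconds already captured. A minimal sketch in Python (the class, parameters and frame labels are hypothetical illustrations, not any vendor’s API):

```python
from collections import deque

class DashcamBuffer:
    """Keeps a rolling window of recent frames so a trigger event
    (e.g., a hard brake) can export the seconds leading up to it."""

    def __init__(self, fps=30, lookback_seconds=5):
        # deque with maxlen silently discards the oldest frame
        # once the window is full.
        self.frames = deque(maxlen=fps * lookback_seconds)

    def record(self, frame):
        self.frames.append(frame)

    def on_trigger(self):
        # Snapshot the pre-event window for upload to the fleet manager.
        return list(self.frames)

# Simulate 10 seconds of driving at 30 fps, then a hard-brake trigger.
cam = DashcamBuffer(fps=30, lookback_seconds=5)
for t in range(300):
    cam.record(f"frame-{t}")
clip = cam.on_trigger()
# clip now holds frames 150-299: the last five seconds before the trigger.
```

The same buffering approach also explains the privacy design mentioned later in the piece: only the short window around a triggering event ever leaves the vehicle.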

In accidents in which a truck rear-ends a vehicle, the video can be the difference between a very expensive claim and no claim at all. “Traditional investigations of an accident almost always attribute the cause of the collision to the vehicle in the rear,” said Fielkow. “However, the video may indicate that the driver of the rear-ended vehicle was swerving in and out of traffic and actually cut off the truck, altering liability.”

“Regrettably, people who get into an accident with a truck see that big name sprawled across the side and tend to blame the company,” said Jennifer Haroon, Nauto chief operating officer and formerly head of business operations at Waymo (Google’s self-driving car project). “Our video evidence exonerates a lot of drivers.”

Gizmos and Gadgets

Telematics devices are becoming increasingly sophisticated and less expensive, encouraging wider use by fleet operators. The visual sensors, for instance, are smaller, even though the semiconductors within them are more powerful. Sandeep Pandya, president of Netradyne, another maker of computer vision cameras, attributes the enhancements to ongoing technological advancements in smartphones.

“We’re in an age of powerful computing capabilities and ubiquitous connectivity via 4G broadband cellular technology,” he said. “Using deep learning, algorithms can automatically draw insights from visual images, accelerometers, gyroscopes and other sensors…Mass quantities of other data like the weather and road conditions also can be analyzed to guide safer driver behaviors.”

That’s uplifting for both truck fleets and their drivers, an employment category where demand far outstrips supply. The American Trucking Associations estimates that the transportation industry needs to hire nearly 900,000 additional drivers to meet rising demand for service. Few young people want to drive a truck for a living. (The average age of truck drivers is 55, according to some sources; the ATA puts the average age of just over-the-road, for-hire drivers a little lower, at 49.)

“Professional truck drivers drive 50 hours a week and 150,000 miles per year on average; they take great pride in what they do,” said Pandya. “Almost the entirety of their work is uneventful, but it’s that 2 percent in which they confront hazardous conditions that defines them. Following an accident, drivers feel vilified, making it hard for the industry to retain them. Telematics is a way to determine the best of the best for retention purposes.”

Netradyne’s cameras, he explained, will ferret out positive driving behaviors as well as negative ones. “You’re now able to know which drivers are the most safety conscious,” said Pandya. “This gives a truck company the opportunity to address its driver shortage problems; instead of spending $10,000 to recruit a new driver, it can put it toward a bonus rewarding its best drivers.”

Filming a truck driver’s entire time at the wheel creates privacy issues, hence both Netradyne and Nauto have designed their telematics devices to produce short video streams of the triggering safety event, such as images of what the driver was doing immediately preceding the accident. This information is automatically provided to fleet safety managers to reduce driver distractedness and isolate factors causing drivers to become fatigued.

Interestingly, the introduction of electric trucks like Tesla’s first semi tractor-trailer, Tesla Semi, is expected to reduce driver weariness. “Electric vehicles produce less noise and vibration and are easier to drive, helping drivers remain vigilant for longer periods of time,” explained Chris Nordh, senior director of advanced vehicle technologies at Ryder System Inc., a provider of rental trucks and fleet management systems. “Reports indicate that drivers are not as exhausted at the end of the day.”

Electric trucks also are able to withstand more of the damage caused by a collision, said Nordh’s colleague Amy Wagner, Ryder vice president of global product insurance and risk management. “Crash data indicates that electric vehicles are the safest vehicles on the road in the event of an accident,” said Wagner. “Electric vehicles are built on what is called a ‘skateboard platform,’ where the batteries are built right into the frame. This creates a vehicle with a low center of gravity that is also a massive piece of metal. Not surprisingly, we’re at the forefront of bringing more commercial electric vehicles into our fleet.”

Beginnings of a Long-Term Partnership

Insurers are bullish on these varied technological and intermodal transport developments. A case in point is Penn National, which provides financial grants to truck insureds interested in implementing telematics and other safety equipment. “We’ll split the cost of the technology 50-50 up to $2,500, in return for feedback on what the data indicates about claims reductions,” said Mancini. “We can then learn from this data in terms of our underwriting and pricing.”

Other insurers perceive similar value in having policyholders share their experiential data. “It gives us the opportunity to provide more insightful consultation to fleet safety managers,” said Tony Fenton, vice president of Underwriting and Product for Commercial Auto, Casualty and New Product Development at Nationwide. “It puts our loss control/risk management professionals in more of an ‘I know’ environment.”

Netcoh from The Hartford agrees, noting that the insurer’s focus in 2019 is to more closely engage with its truck insureds to better understand their use of telematics. “We’re looking to dovetail their work with the research conducted by our safety risk engineers to improve loss costs,” he explained. “Telematics has yet to evolve into a really close data-sharing relationship.”

That’s true. However, Fenton predicts that within the next five to 10 years, such sharing of data will be business as usual. “I can see a day when insureds provide us information on how many drivers are on the road, who these drivers are and how they are driving, and we provide to them information on the location of the vehicle in relation to the accident history of a particular stretch of highway based on weather conditions, traffic congestion and driver time at the wheel,” he said. “As more data becomes available and is shared, insurers can rate a client based on real-time exposures, as opposed to traditional rating programs driven by the type or class of vehicle.”

Wagner from Ryder concurs, pointing to the opportunity for insurers, truck companies and diverse other parties like repair shops to engage in an ecosystem-like software platform. “When you start bringing all these diverse sources of data together, you can create driver scorecards as a means of positive reinforcement to compel better driving behaviors,” she said. “Those scorecards can then make their way into lower insurance rates.”

Need for New Types of Insurance?

As commercial fleets incorporate more autonomous safety features into their trucks, traditional commercial auto policies may need to accommodate new causes of liability. For instance, a collision may not be the result of driver error but in fact be caused by a sensor failure, software glitch or a cyber attack.

Insurer Chubb is presently mulling this new risk landscape. “We have to be prepared to address these new liabilities,” explained Dave Brown, executive vice president and transportation leader in Chubb’s Major Accounts Division.

There are two schools of thought on how insurance policies may need to change. “One is whether or not the current commercial auto liability policy is still the appropriate mechanism, assuming some adjustments,” Brown explained. “The other surrounds the use of product liability policies as an alternative or adjunct to commercial auto.”

While Brown is “on the fence,” he said, regarding which concept is best, he believes the longstanding commercial auto policy has a lot of positives to commend it. “For one thing, it allows for claims to be efficiently administered to rapidly take care of injured victims and make them whole financially, whereas product liability claims are highly litigious—consuming time and costs,” he explained. “If a loss is attributed to a sensor or software defect, my thinking is that the commercial auto policy should be the mechanism for the adjudication and payment of the claim, with this insurer subrogating against the product manufacturer’s insurer to recover the loss.”

Down the line, the experts believe that commercial fleets will inevitably become safer. “The next wave is for all these remarkable cameras and sensors to send their data over the Internet to be analyzed with other types of data like historical loss patterns, real-time weather conditions, traffic congestion and even the Department of Transportation’s data on roadside inspections,” said Fielkow. “Really, the sky’s the limit here. Algorithms can then dig through these massive data volumes to unearth specific areas for continuous safety improvements.”

As this happens, Fielkow can see a day when commercial auto insurers and truck carriers work closely together to help each other become smarter about risk management and loss prevention, he said. That’s good news for both parties and for all of us who regularly venture onto a highway.

Russ Banham is a Pulitzer-nominated business journalist and author of 29 books.

Quake Queller: A New Technology Called the Seismic Muffler Brings New Hope for Reducing Earthquake Damage

By Russ Banham

Leader’s Edge

On Jan. 17, 1994, a 6.7-magnitude earthquake struck the San Fernando Valley region of Los Angeles, killing 72 people, injuring more than 10,000, and causing an estimated $40 billion in widespread property damage. Thousands of homes, buildings and cars were destroyed in what remains one of the costliest catastrophes in U.S. history.

The Northridge Earthquake, named for its apparent epicenter (later determined to be the nearby community of Reseda), stunned seismologists with its ferocity. Catastrophe prediction models had estimated the probability of a 6.7-magnitude earthquake as a one in 500-year event. Now, just 24 years after Northridge, scientists at the U.S. Geological Survey predict a 99.7% chance of another 6.7-magnitude temblor in Los Angeles within the next 30 years.

Much worse is the possibility of a mammoth earthquake striking coastal residents of the Pacific Northwest along the 620-mile Cascadia Subduction Zone, where the Juan de Fuca ocean plate dips under the North American continental plate. The fault zone encompasses the cities of Seattle and Portland, which confront an 8% to 20% chance of experiencing a magnitude-8.0 or higher quake in the next 50 years.

Such doom and gloom projections are daunting for anyone living along the western coastline of the United States. The risk is also of great economic consequence to the global insurance and reinsurance industries, which absorb the financial brunt of earthquakes along with local, state and federal taxpayers. The Northridge Earthquake alone caused insured losses estimated at $25.6 billion in 2017 dollars, more than the industry had collected in earthquake premiums over the prior 30 years. According to the Federal Emergency Management Agency, the damage losses add up to $4.4 billion annually nationwide. Across the planet, earthquake losses in 2016 alone surpassed $53 billion.

Limiting the Impact

On average, roughly 500,000 detectable earthquakes occur each year, of which 100,000 can be felt and 100 cause significant property damage. For millennia, people living in earthquake-prone regions have tried to limit the damage to buildings. Pliny the Elder’s history of ancient Greece includes a reference to the use of sheepskin between the ground and the foundation of a temple to permit the structure to slip and slide with less damage during a temblor. This ancient prevention technique is actually a primitive version of base isolation, a current protection technology. In base isolation, spring-like flexible pads are inserted between a building’s foundation in the ground and the building itself to absorb devastating ground motions.

Now another earthquake protection technology has been developed to do something similar, albeit in a way that stretches the bounds of credulity. OK, it blows the mind.

Developed by scientists at the Massachusetts Institute of Technology’s Lincoln Laboratory, it’s called a seismic muffler, at least for the time being. The concept calls for drilling a V-shaped array of boreholes hundreds of feet deep that slope away from the protected asset, such as a building, an airport runway or a power plant. The array of boreholes one to three feet in diameter is similar in shape and dimension to a set of trench walls.

Cased in steel or a comparable composite material to maintain the structural integrity of the underlying soil and rock, the boreholes divert hazardous surface waves generated by an earthquake away from the protected asset. The bottom aperture of the borehole array allows only higher-frequency, lower-energy seismic waves traveling from the depths of the Earth to enter and propagate. By the time this wave energy reaches the ground surface, it dissipates in much the same way the sounds emanating from a car’s combustion engine are softened by an acoustic muffler.

Tabletop exercises by MIT’s Lincoln Lab, using 3-D supercomputing calculations, indicate the V-shaped array of mufflers can decrease the ground-shaking effects of a 7.0-magnitude earthquake to those of a 5.5-magnitude earthquake or lower. That’s a vast improvement, given the logarithmic Richter magnitude scale. For the purposes of simple math, a magnitude-7.0 quake is 10 times stronger than a magnitude-6.0 quake and 100 times stronger than a magnitude-5.0 quake.
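
For readers who want the arithmetic behind that logarithmic scale, here is a minimal sketch. The `amplitude_ratio` helper is ours for illustration, not the lab’s; it encodes the standard rule that each whole-number magnitude step is a tenfold increase in measured ground-motion amplitude:

```python
# Each whole-number step on the Richter scale is a tenfold increase in
# measured ground-motion amplitude, so the ratio between two magnitudes
# m1 and m2 is 10 ** (m1 - m2).

def amplitude_ratio(m1, m2):
    return 10 ** (m1 - m2)

print(amplitude_ratio(7.0, 6.0))  # 10.0  — tenfold ground motion
print(amplitude_ratio(7.0, 5.0))  # 100.0
print(amplitude_ratio(7.0, 5.5))  # ~31.6 — the reduction the muffler claims
```

In other words, knocking a 7.0 down to a 5.5 cuts the shaking amplitude by a factor of roughly 30.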

Will a seismic muffler work in practice as it does in the lab? The answer appears to be yes. A patent has been issued for the technology, and a 20-page scientific research paper on the muffler was peer reviewed and published in the November 2018 Bulletin of the Seismological Society of America. “We have already received licensing interest in the technology,” says Robert Haupt, the Lincoln Lab staff scientist leading the development of the new earthquake protection system.

Too Good to Be True?

“Wow” factor aside, the technology has yet to be tested in the field. However, the boreholes in their V-shaped array (see diagram) should perform as intended, diverting surface energy away from the protected asset. Questions certainly remain, including the effect of these reflected waves on neighboring structures, the impact of drilling thousands of boreholes, and the overall cost compared to existing technologies, such as base isolation, which is limited to new construction. The effectiveness of boreholes may be greater than base isolation, since the total surface wave energy is diverted. But more analysis, including cost analysis, would be needed to determine the relative effectiveness of each approach for a particular property.

Putting aside these questions for the moment, we sent a description of the new technology to several structural engineers, a leading catastrophe risk modeler, two state insurance regulators, two large reinsurers, and the California Earthquake Authority. Collectively, their interest was piqued, though they remained guardedly optimistic. As Dave Jones, then California insurance commissioner (his term ended in 2018), puts it, “The question comes down to what is realistic and affordable. While this appears promising, will it prove to be practical and affordable?”

“I read the piece you sent with an open mind, and it seems perfectly plausible to me,” says Keith Porter, a research professor in the Department of Civil, Environmental and Architectural Engineering at the University of Colorado. “That’s not saying I would recommend its use tomorrow, because it is not quite there yet, much less available. But I get the concept. The fact that it is peer reviewed in such a reputable journal gives further credence to its scientific validity and usefulness.” Porter holds a PhD in structural engineering from Stanford University.

Certainly, there are obstacles in the way of deploying the technology, including the need to obtain site access permits to bore thousands of holes. But Haupt believes the benefits of the solution overshadow its impediments.

“As long as you’re able to drill the boreholes at a distance of 300 meters or so from the asset, you can protect all kinds of structures—from a nuclear power plant to an entire neighborhood of residential homes,” he says. “If a community like Beverly Hills wanted to invest in putting this in to protect their homes on an aggregate basis, the destructive ground motion from an earthquake would be significantly reduced.”

This possibility caught the attention of Janiele Maffei, chief mitigation officer at the California Earthquake Authority, a privately funded, publicly managed organization that sells California earthquake insurance policies through participating insurance companies. Maffei, a registered structural engineer, is responsible for directing the authority’s statewide residential earthquake retrofit program.

“That’s a very interesting possibility, since we’ve been looking solely at mitigation on a building-by-building basis,” Maffei says. “The possibility of protecting more than one structure at a time is an exciting thing, particularly in California, which bears two thirds of the nation’s earthquake risks.”

Maffei cautioned that her opinion is tempered by financial reality—that is, the cost of the new technology. Other experts agreed. “This all sounds very interesting and promising, but we need to consider the real-world implications of the technology, chiefly its scalability and cost-effectiveness,” says Erdem Karaca, Swiss Re’s head of catastrophe perils in the Americas. (Not incidentally, Karaca holds a PhD in civil engineering from MIT.) “We’re talking thousands of boreholes drilled hundreds of meters deep to protect a hospital or a power plant. Is this safe? And how much will it cost?”

Haupt says the number of boreholes and their dimensions depend on the application. “Say you wanted to protect a kilometer-long airport runway,” he says. “The depth of the boreholes would be approximately 50 meters, the diameter about one foot, and the number of boreholes around 5,000 on each side of the runway.”

He further estimates it would require about 5,000 boreholes to protect a hospital, about 10,000 for a nuclear power plant, 40,000 for a 10-kilometer-long oil and gas pipeline, and 50,000 to 200,000 for a residential community (depending, of course, on its size). That sure sounds like a lot of boreholes, but Haupt counters that drilling with modern technology is “relatively straightforward.”

But what about the cost of all that drilling, compared to the expense of base isolation? According to various estimates, it costs $2,000 to $3,000 to drill a one-foot-diameter well 300 feet into the ground. And that’s just one borehole. Nevertheless, Haupt maintains the aggregate cost of seismic mufflers is much less than comparable base isolation expenditures.
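
Combining Haupt’s hole counts with those per-hole drilling estimates gives a rough sense of scale. A back-of-the-envelope sketch (the dollar figures and hole counts come from the article; the helper itself is illustrative, not an engineering quote):

```python
# Back-of-the-envelope cost comparison using the article's figures:
# $2,000-$3,000 to drill one ~300-foot borehole, times the number of
# holes Haupt estimates for each type of asset.

COST_PER_HOLE = (2_000, 3_000)  # low and high drilling estimates, USD

def drilling_cost(n_holes):
    low, high = COST_PER_HOLE
    return n_holes * low, n_holes * high

for asset, holes in [("hospital", 5_000),
                     ("nuclear plant", 10_000),
                     ("10 km pipeline", 40_000)]:
    low, high = drilling_cost(holes)
    print(f"{asset}: ${low / 1e6:.0f}M to ${high / 1e6:.0f}M")
```

On these assumptions, shielding a hospital runs on the order of $10 million to $15 million in drilling alone, which is the comparison Haupt draws against base isolation’s tens of millions per building.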

“To build a tall skyscraper today using base isolation costs tens of millions of dollars per building,” he explains. “Based on general calculations from our extensive 3-D supercomputer computations, we estimate we could protect many more buildings at the same cost. So, yes, it would be cost effective.”

A more rigorous cost-benefit analysis will be available following the lab’s field-testing, Haupt says, noting that the lab is looking to drum up a combination of government and private-sector funding to produce a more comprehensive systems analysis. If the test findings are consistent with previous experiments and current cost estimations, former California insurance commissioner Jones says, the technology “could add a new level of protection for Californians. Anything we can do to reduce the potential for loss of life or property from damaging earthquakes we would support.”

Questions on Efficacy

Not all experts are optimistic about the efficacy of the technology. Robert Muir-Wood, the chief research officer at catastrophe modeling firm RMS, who holds a PhD in Earth sciences from Cambridge University, is dubious on several fronts. “This is interesting, ingenious and certainly a novel idea, but my gut reaction is that it’s science fiction,” Muir-Wood says. “Earthquakes are rich in many frequencies of vibration, and this procedure by design cannot anticipate what these frequencies will be prior to an event. It may muffle some frequencies but not all of them. Consequently, I don’t think it will work.”

Apprised of the criticism, Haupt responds that Muir-Wood is correct about earthquake vibration frequencies, which he equates to the frequencies in a broadband spectrum.

“He’s right that an earthquake does not produce a single tone of vibration but many different tones at once,” Haupt says. “But he may be unaware that our system is a broadband defense. By having multiple boreholes surrounding the protected asset, the surface waves approaching the borehole system are reflected or diverted. Any energy coming from below the earth and into the aperture is dissipated, enabling an indifference to frequency. So there is no frequency dependence.”

Karaca, from Swiss Re, brought up another concern—whether or not the V-shaped array of seismic mufflers might divert the energy of an earthquake toward neighboring structures. “Since the technology is designed to deflect surface waves, which are the most damaging aspect of an earthquake, that energy has to go somewhere,” he says. “My question is: where?”

“That’s an excellent query,” Haupt says. “He’s right that the energy will be diverted. However, the array pattern is designed to promote seismic wave self-interference, diverting the destructive effects of the waves. In other words, we’ve designed it in such a way that neighboring structures would not experience anything greater than the ground shaking already produced by the earthquake.”

Now What?

All in all, the unique earthquake protection technology appears to present a viable alternative to base isolation, although Haupt prefers to call it a “supplement” to current mitigations. The next step, he says, “is to go outside.”

The lab is undoubtedly eager to undertake real-Earth scenario testing, which Haupt believes will confirm the findings of the detailed 3-D supercomputer models demonstrating the technology’s effectiveness. Physical tests to date have involved drilling boreholes into thick blocks of plastic topped with scaled-down structures. To approximate different earthquake magnitudes, the blocks were shaken and the effects measured by an accelerometer. “We’re confident that the supercomputer modeling is accurate, but to really prove this works, we need to scale up the testing and experiment with actual boreholes drilled in the earth at different depths, densities, and so on,” Haupt says.

As at all academic laboratories, budgets are tight when it comes to large-scale tests. Is this something the insurance and reinsurance industries might be interested in funding, given the potential for a long-term return in decreased damage losses?

Maffei, from the California Earthquake Authority, is sanguine about the possibility. “Any technology that would mitigate the impact of an earthquake deserves monetary means for further testing,” she says. “When base isolation was explored in the aftermath of the Northridge Earthquake, it was tested first on a single structure. The results were encouraging, guiding its use in additional buildings. Little by little, it has proven itself.”

Richard Quill, expert risk research analyst at Allianz, shares this perspective. “We insure some of the earthquake risks affecting nuclear power plants and large oil refineries,” says Quill, who analyzes and coordinates the large reinsurer’s response to natural catastrophes. “If this technology is proven to work, it would obviously reduce earthquake damage risks to these facilities. From an insurance perspective, it is all very interesting. We’ve invested in the past in new risk-mitigation technologies, including how to make automobiles safer. But we would need more evidence.”

Russ Banham is a Pulitzer-nominated financial journalist and best-selling author.

How We Can Make Our Power Grids More Stable And Resilient

By Russ Banham


It’s a classic supply-demand imbalance. Global energy demand is forecast to rise over the next 20 years, but fossil fuels — currently the principal source of power generation — cannot meet growing needs.

That’s the upshot of the latest global energy report from the International Energy Agency (IEA). While fossil fuels will remain the primary source of electrical power for the foreseeable future, the increasing use of renewable energy sources like wind and solar needs to complement them.

That’s good news for the planet. These varied wellsprings of electricity, however, put added pressure on the country’s aging electrical grid — a challenge that makes necessary its own set of solutions.

A Massive And Ungainly System

Today’s grid is a massive feat of human engineering. Composed of more than 3,200 electric distribution outlets, nearly 10,000 generating units and roughly 300,000 miles of high-voltage transmission lines, it takes the shape of several large, interconnected systems spanning the continental 48 states.

The grid, originally built in the 1890s, has long been a work in progress. Initially, it consisted of more than 4,000 individual electric utilities operating in isolation. After World War II, these utilities began to connect their respective transmission systems. As new technological advancements emerged in each subsequent decade, they were effectively bolted onto this creaking foundation.

The expansion held, but unfortunately, the grid’s patchwork quality makes it more vulnerable to the risk of power outages. When one part of the connected grid needs repair, everyone is at risk of losing electricity. Outdated engineering, antiquated transmission lines and alarming increases in catastrophic weather events and cyberattacks also contribute to an increasing number of outages. When the grid buckles, vulnerabilities emerge — for businesses, for consumers and for the security and economic stability of the country itself.

Several innovative efforts are underway to modernize, strengthen and stabilize the grid. A case in point is the ongoing development of smart grids, using digital communications technologies, such as smart meters and smart appliances, along with other energy-efficient resources. Other promising advances include the deployment of microgrids and autonomous power plants.

Such ingenious approaches to traditional energy generation are direly needed, given the complexity and instability of the current grid, the emergence of increasingly diverse sources of electricity and the rising demand for power worldwide.

Hungry For Power

The IEA estimates that electricity demand worldwide will increase 60 percent by 2040, with developing countries accounting for much of the demand. By 2040, the IEA report states, the Asia-Pacific region alone will consume 46 percent of the world’s energy. In more developed countries, the IEA expects demand to increase about 30 percent through 2040, due in part to the rising use of digital and data technologies by businesses. Electricity markets are particularly vulnerable to the demand brought on by technological change, such as the increasing digitization of the economy and the rise of electric vehicles (EVs).

Despite growing global concern over the climate-changing effects of greenhouse gas emissions, coal is expected to remain the world’s largest source of electricity. However, projections indicate that coal will only generate 25 percent of the world’s total power supply — a 38 percent decrease in share. Renewable sources of energy that aren’t depleted by use, such as water, wind and solar power, will make up the difference.

Integrating renewables with traditional fossil fuel energy sources is crucial to optimizing power plant processes. But the inherent variability of wind and solar energy production is a challenge. Winter typically brings strong winds, while sunlight is at its height in the summer. For now, not all excess capacity can be put to use. New concepts and approaches are key as grid operators seek ways to balance a variable supply with fluctuating demand, ensuring a consistent and steady balance of electricity sources and supply.

Stabilizing The Grid

Microgrids address current grid constraints by circumventing them altogether. When outages require repairs to the larger central system, these secondary grids can operate on their own using local energy generation. This method not only provides a backup source of power when the main supply falters, but also empowers small communities to boost their energy independence and reduce their reliance on an unmanageably large system.

Smart grids, such as the one MHI built and successfully tested in Japan in 2011, represent another encouraging development. In 2017, MHPS launched MHPS-Tomoni, a revolutionary digital solutions platform that discovers ways to optimize power plant operations.

MHPS-Tomoni focuses on providing complete energy solutions in partnership with customers (“tomoni” means “together with” in Japanese). The platform comprises diverse applications that harness and analyze big data to enhance plant performance, improve electricity availability, lower costs and benefit the environment.

Thousands of sensors capture massive volumes of power plant data every second, and MHPS-Tomoni leverages this data to provide operators with insights for maximizing overall plant performance.

“The initiative is an important step as we seek to build the cognitive power plant of the future,” said Kenji Ando, MHPS president and CEO.

As these examples illustrate, industry leaders are taking key steps to gird the grid for the anticipated rise in global power demand and the move toward a wider array of energy sources.

Such developments will come as welcome news to the close to two billion people worldwide who still lack reliable sources of electricity. For the rest of us, they’ll streamline access and output, doing so with less waste and a smaller impact on the planet.

Russ Banham is a Pulitzer-nominated financial journalist and best-selling author.

Doubling Down On ESG: Proxy Season With A Social Conscience

By Russ Banham

Corporate Board Member

As the 2019 proxy season commences, shareholder proposals continue to be ripped from the headlines. Board diversity, climate change, the opioid epidemic and gun violence will top the agenda in corporate boardrooms in 2019—with directors called upon to ensure that sophisticated policies are in place to address them.

Given the intense media attention surrounding all-male boards, mass shootings and severe hurricanes, floods and wildfires, such shareholder activism is not terribly surprising. What is unusual is the push by large investors to effect ESG-related changes by companies well before the annual meeting.

“The uptick in engagement is encouraging companies to take actions that previously were accomplished via the more antagonistic shareholder proposal process,” says Courteney Keatinge, director of ESG research at proxy advisory firm Glass Lewis & Co. “The increase in dialogue appears to be diminishing the number of shareholder proposals that go to vote each year.”

Investors have made their positions on ESG clear over the past two years—such issues are viewed as material to a company’s long-term financial performance, depending on the nature of its business.

“Investors are looking at ESG in a more nuanced manner,” Keatinge says. “ESG can have this ‘warm and fuzzy’ connotation—like planting trees and promoting nonviolence. But, these are not ‘feel good’ issues, they’re legitimate operational issues.”

In other words, something that is good for society can also be good for the bottom line. “This is not the tail wagging the dog,” says Anne Sheehan, former director of corporate governance at CalSTRS (California State Teachers’ Retirement System) and a senior advisor at global advisory-focused investment bank PJT Partners. “Companies should have a firm grasp of what risks they need to assess and the investments they need to make in response to our changing climate or other environmental and sustainability risks… there is business value in these decisions as much as social value.”

Clout Matters

That sentiment is reflective of the position on ESG taken by large institutional investors like BlackRock, Vanguard and State Street Global Advisors (SSGA). Together, the three firms are the largest shareholder in 40 percent of all publicly listed firms in the U.S. and 88 percent of the S&P 500.

Over the past few years, the firms have been proactively posting their voting decisions online prior to annual meetings, giving individual investors the opportunity to see where they stand.

“Previously, if shareholders wanted to effect change, they submitted resolutions that went to vote,” says Sheila Hooda, CEO of Alpha Advisory Partners and a member of two boards (public company Virtus Investment Partners and Fortune 300 insurer Mutual of Omaha). “Now, with increased institutional engagement, companies are encouraged to take [early] actions that would have been handled through the shareholder proposal process in the past.”

BlackRock CEO Larry Fink made headlines when his annual letter to CEOs suggested ESG would factor into the $6.3 trillion investment firm’s decisions with statements like, “A company’s ability to manage environmental, social, and governance matters demonstrates the leadership and good governance that is so essential to sustainable growth, which is why we are increasingly integrating these issues into our investment process.”

This position has been proliferating across the investment universe, with Vanguard, State Street and other large investors helping to make ESG the “new normal” in shareholder demands. Among these is SSGA, the world’s third-largest asset manager, with nearly $2.8 trillion in assets under management. Matthew DiGuiseppe, vice president and head of Americas on SSGA’s Asset Stewardship Team, acknowledges that board discussions and evaluations have shifted toward corporate social responsibility, “thinking effectively about the impact of a company’s culture on long-term financial performance and strategy,” he says.

Keatinge agrees. “Board members must undertake a materiality assessment of what I call the ‘non-financial metrics’ to identify issues of material importance to a company’s long-term financial performance,” she says. “In doing these assessments, they may find that climate change or gun violence or the opioid crisis—depending on their business profile—are, in fact, material. If this is the case, board members must take pains to do something about it.”

Bye-Bye, Old Boys Network

While gun violence and opioids affect the performance of only a few companies, the gender diversity of boards affects most companies. A June 2018 survey by consulting firm Aon indicates that 68 percent of 223 institutional investors have expressed concerns over board gender diversity in their proxy voting.

Large institutional investors have put a bull’s-eye on all-male boards. It’s been nearly two years since SSGA erected “Fearless Girl,” the bronze statue featuring a young girl squaring off against Wall Street’s symbolic giant bull, in New York’s Financial District in 2017. The firm’s message to the then-787 companies with all-male boards was crystal clear: Add women to your ranks now.

More than 300 companies have since heeded the demand. The remainder suffered the firm’s rebuke: SSGA voted against the chair of the board’s nominating committee entrusted to select new members.

Unquestionably, board diversity is gaining momentum as a priority. “With California breaking ground as the first state to require publicly traded firms to place at least one woman on their board of directors by the end of 2019, we believe board diversity will be a prominent issue among longer-term and activist shareholders alike,” says Dana S. Grosser, spokeswoman for Vanguard, which tallies more than $5.1 trillion in investment assets.

“Boards without women will increasingly be perceived as outliers,” says Marc Goldstein, executive director and head of U.S. research at proxy advisory firm Institutional Shareholder Services (ISS). “Our surveys indicate that far fewer investors are willing to admit that a lack of board diversity is not a problem. Boards have to get it right.”

ISS’s 2018 survey reveals that more than 80 percent of investors consider all-male boards to be problematic, up from 69 percent the prior year. Small wonder many interviewees anticipate that broader representation of women in both leadership and governance ranks will guide much of the proxy discussions in 2019. At present, the U.S. ranks 14th among global markets in boardroom gender diversity, according to a 2018 U.S. Board Study by ISS.

That may be changing, now that U.S. boards are compelled to voluntarily disclose more information about each director, such as their “skill matrices” across age, tenure, varied experiences, expertise and outside commitments. “It’s important for investors and shareholders to have more transparency about who sits on the board and whether that person’s diverse background, experiences and skill sets are aligned with the long-term strategic needs of the enterprise,” says Friso van der Oord, director of research at the National Association of Corporate Directors (NACD).

“Board members have to pick the best candidate—period,” says Stephen Kasnet, vice chair and lead director at both Granite Point Mortgage Trust and Two Harbors Investment Corporation. “If they’re required to pick one or more women, the challenge is to pick not the most capable women, but the most capable directors.”

Changing Climate

Also on the ESG agenda is climate change. EY’s survey of investors indicates that nearly eight in ten (79 percent) believe climate change is a significant risk, with 48 percent stating that enhanced reporting of these risks is a priority.

ISS has long supported shareholder proposals requesting companies to disclose information on their climate change risks. The subject reached a turning point in 2017, when shareholders at Exxon, Occidental and PPL Corp. passed landmark resolutions requiring the companies to disclose the risks that climate change posed to their businesses. Goldstein projects that similar resolutions will be in the offing this year, “as investors succeed in achieving heightened disclosures of climate change risks on a company-by-company basis,” he says.

ISS recently aligned its policy with recommendations by the Task Force on Climate-Related Financial Disclosures. Chaired by Michael Bloomberg, the Task Force’s guidelines call for greater transparency and board oversight of climate change exposures. The effort paid off. “During the 2018 proxy season, more than 65 resolutions regarding climate change were submitted by U.S. investors, of which 17 involved risk assessments based on the so-called two-degree scenario,” Goldstein notes.

Two-degree scenario proposals are an outgrowth of the Intergovernmental Panel on Climate Change’s determination that two degrees Celsius of warming above pre-industrial levels is the upper limit the environment can tolerate.

The proposals generally call for greater disclosure of companies’ preparedness for climate change and efforts to account for climate-related risks and opportunities. Despite rising investor interest, proactive measures by companies led to the withdrawal of the majority of two-degree scenario resolutions in 2018, says Keatinge. “We had awaited an onslaught of such resolutions, but they were withdrawn or negotiated away,” she says. “Ultimately, only five went to a vote, and of those five only two received majority support.”

That’s because many companies had already committed to enhancing disclosure. “This is heartening, and the credit is due the investors,” Keatinge adds.

Ron Schneider, director of corporate governance services at DFIN, agrees, noting that this represents a seismic shift in investor perspective. “When we saw shareholder proposals on climate change in the past, they didn’t get the support of the largest indexed investors,” Schneider says. “While these investors recognized that the climate was changing, they had difficulty connecting the impact to long-term financial performance and shareholder value. That has now changed.”

Sheehan from PJT Partners cites the value of water conservation to a soft drink manufacturer. “If a company is a water-intensive business, conserving water through processes that allow for the reuse of water, for instance, will have positive bottom line impact,” she explains. “That’s also good from an ESG standpoint. For a soft drink maker, these are not two separate silos.”

Social Sensibility

The opioid crisis, firearms and pay equity are also on investors’ ESG agenda. For example, following the February 2018 mass shooting in Parkland, Florida, multiple shareholder proposals asked that retail companies stop selling guns or part ways with the National Rifle Association. In response, Walmart and Dick’s Sporting Goods placed new restrictions on gun sales, while others like MetLife, Delta Air Lines and FedEx cut ties with or discontinued preferential treatment of the NRA.

“For some companies, gun violence is an issue of material importance,” says Keatinge. “An example is Sturm Ruger, whose shareholders at the 2018 annual meeting approved a proposal for the company to detail its plans to track violence associated with its firearms, disclose its research on ‘smart’ gun technology, and assess the risks that gun violence poses to its reputation and long-term financial performance.” Sturm Ruger has until February to produce a report addressing the shareholders’ concerns.

In 2018, a number of first-time shareholder proposals involved the opioid crisis. Keatinge and others anticipate more of the same this go-round. “Shareholders at Depomed (a distributor of opioid painkiller Nucynta now known as Assertio Therapeutics) voted in the majority to approve policies enhancing the board’s governance of the impact on the bottom line of continuing to distribute opioids,” Keatinge says.

These varied actions demonstrate the potential for investors and shareholders to, at a minimum, raise the level of dialogue around ESG issues—and possibly effect change.

It’s The Job Title Defining 21st Century Companies — But What Does CDO Mean?

By Russ Banham


Today, data production is estimated at 2.5 quintillion bytes per day. To get a sense of that scale, 2.5 quintillion pennies, laid out flat, would cover the Earth’s surface five times. This figure is only set to surge with developments in IoT. A 2017 IDC report predicted a tenfold increase by 2025 — 30 percent of which will need real-time processing.
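As a rough sanity check, the cited figures can be combined in a back-of-the-envelope calculation (a sketch only; it simply applies IDC's tenfold projection and the 30 percent real-time share to the 2.5-quintillion-byte baseline):

```python
# Back-of-the-envelope check of the data-growth figures cited above.
# Assumes a 2.5 quintillion bytes/day baseline, IDC's projected tenfold
# increase by 2025, and a 30 percent real-time processing share.

QUINTILLION = 10**18

daily_bytes_now = 2.5 * QUINTILLION      # ~2.5 exabytes per day today
daily_bytes_2025 = daily_bytes_now * 10  # IDC's tenfold projection
realtime_share = 0.30                    # portion needing real-time processing

realtime_bytes_2025 = daily_bytes_2025 * realtime_share

print(f"Projected 2025 volume: {daily_bytes_2025 / QUINTILLION:.1f} quintillion bytes/day")
print(f"Real-time portion:     {realtime_bytes_2025 / QUINTILLION:.1f} quintillion bytes/day")
```

On these assumptions, roughly 7.5 quintillion bytes of data would need real-time processing every day by 2025.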

All of this data represents untapped business value, fueling innovations that make company operations more efficient, online interactions more seamless, and recruitment and hiring more aligned with business strategy. Yet given this exponential increase in business data, experts say it is wise for companies to hire for specific roles to manage the data’s development and application.

“If you look at the meteoric rise in data from a business strategy perspective, you need to develop ways to collect data, cleanse and archive it, and apply analytics to it to extract insightful information,” said Jim Johnson, senior vice president of technology staffing services at Robert Half, a global job search and staffing firm. “This isn’t just `bits and bytes.’ You need people with the right skill sets who understand the data and digital transformation strategy to bring it to fruition.”

‘Jobs of Great Importance’

Entrusted to garner these benefits are CDOs — yet here is where things get complicated. The acronym covers two types of executive leadership roles — chief data officers and chief digital officers — each unique, yet both focused on data oversight, and both in high demand as companies transform their operations to understand, analyze, and monetize data for business purposes.

“They are both focused on data, realizing its core value as a business asset,” Johnson said. “But fundamentally they have different corporate missions.”

This explains the recent hiring spree for both types of CDOs. According to a 2018 Forrester survey, more than half of companies have appointed a chief data officer, and 18 percent are preparing to do so in the future. (Another survey by NewVantage Partners pegs the hiring percentage higher, with nearly two-thirds of companies hiring for the role — a considerable leap from the 12 percent that employed such executives in the firm’s 2012 survey.) Similarly, a PWC study indicated that 60 percent of chief digital officers were hired between 2015 and mid-2017.

“Every company is going through a digital and data transformation or darn well should be,” said Johnson, who helps companies hire for both types of CDOs. “This compels them to find people to lead these initiatives.”

The question is, what do these roles look like? (And does your organization need one?)

Chief Digital Officers: A Business-Savvy Bunch

In essence, chief digital officers own customer-related analytics and metrics. What’s more, they are responsible for harnessing these insights across the organization, developing digital approaches to solve business problems. An example in a manufacturing context is analyzing supplier data to determine ways to enhance supply chain efficiencies.

Chief digital officers typically have backgrounds in sales, marketing, or customer service, as well as knowledge of how to use innovative technologies to enhance business functions. These leaders generally partner with sales and marketing executives to streamline and improve the customer experience across mobile and digital channels, and at times will guide the development of marketing and advertising strategies.

These responsibilities, however, are evolving. “Chief digital officers are now being asked to take on the additional role of business information analyst,” Johnson said. “Given these newer job tasks, the best chief digital officers need to have significant strategic, financial, operational, and interpersonal communications skill sets, in addition to technological chops.”

Chief Data Officers: Data Science to Boot

By contrast, the chief data officer is charged with managing the access, analysis, and dissemination of data to effect better business outcomes. To do that requires transforming the enterprise around data to identify new product opportunities, enhance customer experience, improve regulatory compliance, spot competitive challenges earlier, and make operations leaner and more efficient.

“Anything that involves data can have an impact on financial performance,” Johnson explained. “We’re now at the point where technology is beginning to catch up to the massive volumes of structured and unstructured data being generated. What used to be stored in a database is now stored in the cloud, where it can be sliced and diced into insightful bits of knowledge.”

Given the focus on data and analytics, these CDOs often have backgrounds in mathematics, computer modeling, and data science. Typically, they lead the Data and Analytics department that is focused on achieving the corporate vision for enterprise-wide data access, use, and governance.

A recent Forbes Insights study shows this is no easy task — almost 80 percent of executive survey respondents said 40 percent or less of their organization’s data is available for sharing across the company. “It’s up to the CDO to make data available for use by all parts of the organization — to identify business opportunities and solve business problems,” said Johnson.

One example is the insurance industry, where data drawn from image-recognition software embedded in drones is improving the underwriting of homes and office structures, and where data from claims filed by policyholders can help customers mitigate potential losses.

An Evolving Role

The responsibilities of both CDOs are expected to become broader, as newer technologies like deep learning, blockchain, and the IoT burgeon. Business disruption is a constant today, requiring companies to stay on top of the latest digital and data trends — and the technologies that power them — to remain competitive.

This responsibility is the job of the CDO, both of them, albeit in different ways.

“Many CEOs perceive CDOs as `change agents’ — digital and data transformation specialists,” said Johnson. “Now and in the near future, it’s a job that holds great importance.”

Russ Banham is a Pulitzer-nominated financial journalist and best-selling author writing frequently about the intersection of business and technology.

Five Ways to Recruit and Retain Generation Z

By Russ Banham


They’re fresh-faced, smart, and headed your way. Gen Z, true digital natives born post-1996, are entering the workforce with skills and expectations that require rethinking current hiring and workforce paradigms.

For one thing, these entry-level workers are not all college graduates, as many Gen Z’ers are bypassing higher education to obtain prized internships and an early career start. The downside is their steeper learning curve due to their abridged educations, made more problematic by possible shortcomings in social skills, since technology has shaped much of their thinking and intellectual curiosity. As a recent Deloitte report stated, “a shortfall in highly cognitive social skills such as problem solving, critical thinking, and communication (could be) evident.”

Companies are challenged to position this young talent in a workforce composed of multiple generations, given postponed retirements for Baby Boomers and the continuing presence of Gen X’ers and Millennials. That’s a lot of generations with different mindsets, cultures, and technological proficiencies to manage, which requires new job and workplace considerations.

A good start is knowing what Gen Z’ers want. According to a 2017 Medium survey of 5,000 Gen Z’ers, they seek an empowering work environment with autonomy. Other studies, including a survey by Dell Technologies that spoke to more than 12,000 Gen Z’ers from 17 countries, indicate that 80 percent aspire to work with cutting-edge technology: Nearly all (91 percent) said technology would influence their job choice, and 80 percent feel technology and automation will help create a more equitable work environment.

Interestingly, they also appear to favor face-to-face communications in their work interactions, despite the stereotype that Gen Z’ers are antisocial. Still, who (or rather, what) they work for is important. For the most part, Gen Z wants employment at companies that are socially conscious, with transparent stands on the environment, social, and governance issues.

As employers prepare to recruit, retain, and retrain Gen Z’ers, here are five moves to consider.

1. Help Them Chart Their Career Paths

Gen Z’ers often have a singular set of talents, such as social media proficiency. However, softer skills are more difficult to tease out of a resume or glean from in-person interviews.

To nurture both skill sets, recruiters should consider tools that combine neuroscience games and artificial intelligence (AI) to evaluate a job applicant’s cognitive and emotional characteristics. Tools like Pymetrics, which uses AI to match talent, not only ferret out these traits but also pair employees with tasks they can perform well now. This gives employers time to target appropriate on-the-job training and upskilling to fill talent voids.

At AXA XL, for example, the large global insurer recently developed a self-assessment data analytics tool to help employees chart their own career paths. “As part of the onboarding experience, we request new hires score their proficiencies, from one to five, in 87 distinct skills,” said Henna Karna, AXA XL chief data officer. “With that data, we’re able to glean skill set gaps and their motivation areas for career growth, which then informs the need for related training and upskilling.” (The new tool will be rolled out in 2019.)
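A self-assessment tool of this kind boils down to comparing scored proficiencies against target levels for a role. The following is a minimal illustrative sketch, not AXA XL's actual tool; the function name, skills, and target levels are all hypothetical:

```python
# Illustrative sketch (not AXA XL's actual tool): compare a new hire's
# self-scored proficiencies (1-5) against target levels for a role and
# surface the largest skill gaps as candidates for training.

def skill_gaps(self_scores, role_targets, top_n=3):
    """Return the top_n skills with the biggest shortfall vs. the role target."""
    gaps = {
        skill: role_targets[skill] - self_scores.get(skill, 1)
        for skill in role_targets  # skills absent from the self-assessment default to 1
    }
    shortfalls = {skill: gap for skill, gap in gaps.items() if gap > 0}
    return sorted(shortfalls.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Hypothetical scores for a handful of the 87 skills mentioned above.
scores = {"data analysis": 4, "underwriting basics": 2, "presentation": 3}
targets = {"data analysis": 4, "underwriting basics": 5, "presentation": 4, "SQL": 3}

print(skill_gaps(scores, targets))
# → [('underwriting basics', 3), ('SQL', 2), ('presentation', 1)]
```

The ranked shortfall list is what would then inform targeted training and upskilling, as Karna describes.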

2. Consider Shared Workspaces

As technology natives, Gen Z’ers are also adept multitaskers, able to monitor inputs from diverse sources simultaneously. Office environments that inhibit their independent use of technology to solve problems are thus a bad fit. In this regard, many employers are moving toward shared workspaces, or hot desking.

Rather than confine a Gen Z’er to a single desk, allow them to roam freely and plug into any free desk. By designing a workspace with workstations dotting the landscape and giving Gen Z’ers (and everyone else) the opportunity to grab an open desk, they’re able to work with others — or on their own when inspiration strikes — on the fly.

“The cubicle was created in the 1960s, and it provides a functional purpose. But work has changed drastically. The management styles and tools we use have evolved,” explained Dawn Longacre, Dell’s global workplace strategist. “Work is much more collaborative today, so we need to break down the walls.”

Shared workspaces allow for more spontaneous engagement with colleagues, the option to secure advice and counsel, and perhaps the ability to align interests with unexpected mentorship opportunities.

At Dell, leaders have converted virtually every space into a communal work environment, overhauling building terraces with chaise lounges, patio seating, and Wi-Fi outlets so employees can work outdoors and soak up some vitamin D. Indoors, company cafeterias are revamped with coffee bar tables, picnic tables, benches, lounge areas, two-seater booths for private conversations, and Wi-Fi hookups.

3. Offer Empathy Workshops

While Baby Boomers grew up in hierarchical, military-style “command and control” work environments, Gen Z’ers tend to flourish in environments where they can execute tasks and solve problems on an independent basis.

“Tenured managers often are used to telling the people under their supervision what they need to do, as opposed to listening to what they’re interested in and good at doing,” said Cecile Alper-Leroux, a seasoned economic anthropologist who is vice president of human capital innovation at Ultimate Software, a leading provider of human capital management solutions.

“Gen Z’ers, on the other hand, are extremely good at independently solving whatever it is that managers need them to solve; they’ve been doing this with their smartphones since they were kids,” she said. “What they’re really interested in is a collaborative relationship with their managers, where they can have an exchange of ideas and dialogue.”

These different styles require a fair amount of give and take across the generations, and that’s where empathy workshops come in. “Empathy or openness workshops, in which managers and staff share their career interests, challenges, and goals offer a way to bridge generational differences,” Alper-Leroux said. “They also present the opportunity for everyone to participate in open communication, furthering workplace relationships and the goal of a more engaged workforce.”

4. Develop a Unique Innovation Sabbatical

Given the interest among Gen Z’ers to have an equitable work-life balance, employers should consider the plusses of innovation sabbaticals, whereby employees are given time off from their regular jobs to develop other projects. Free from their daily constraints, independent-minded Gen Z’ers can generate out-of-the-box innovations, fueling their own desire to innovate and, perhaps, enhancing corporate performance.

While this may sound cutting-edge, innovation sabbaticals have been around for nearly a half-century. 3M, a conglomerate of health care, consumer goods, and other businesses, is credited with developing the concept in 1974, calling it the “15 percent program.” The title references the 15 percent of paid time given to employees to pursue (so-called) outrageous ideas. Google followed suit with its “20 percent program,” which hatched such breakthrough concepts as Gmail and Google Earth.

A similar practice is in play at BlackLine, a publicly traded provider of financial and accounting software automation solutions. “We encourage employees to identify projects outside of their core responsibilities to broaden their experience,” said Susan Otto, BlackLine’s chief people officer. “This not only widens their perspective of the company, it also increases their understanding of cross-functional dependencies, and allows them to work on initiatives important to them and the company.”

5. Promote Cross-Pollination

A successful way to flatten any Gen Z learning curve is cross-pollination — assigning them to different parts of the business to get a crash course in how everything comes together.

BlackLine is in the process of developing a work study-abroad program designed to provide short-term international opportunities and expand global growth. “As opportunities are presented, we’ll look internally to offer employees the chance to step out of their current roles for a period of time, to take on a different job assignment in one of the 10 countries where we currently operate,” Otto said. “Not only can they absorb a country’s unique culture and business practices, they will acquire a better appreciation for the work their colleagues perform abroad, aligning everyone in a truly unified enterprise.”

Her last point resonates. With so many different generations and work behaviors crowding companies, employee alignment in pursuit of the business vision is crucial to long-term financial performance.

Russ Banham is a Pulitzer-nominated financial journalist and best-selling author.

Out-Of-The-Box Approaches To Executive Education

By Russ Banham

Chief Executive

School days, school daze. Many CEOs are rethinking their expenditures on traditional executive development programs. It’s not that an MBA from a marquee institution has lost its value; it’s just that quantifying the impact on business outcomes is elusive. Sure, employees come away from the experience with enhanced knowledge, but applying it to business needs doesn’t happen overnight, given entrenched management structures and the demands of today’s fast-changing global business environment.

No surprise, then, that many companies are taking a fresh look. Led by CEOs who are firm believers in the value of continuous education, such organizations are sponsoring a wide range of internal programs customized to specific business needs and goals. Like more traditional Exec Ed, the programs are intended to fill skills gaps, but they’re also designed to produce more tangible results. By developing more perceptive and engaged employees, the programs improve productivity, reduce turnover rates, burnish a reputation as an employee-centric business and offer the opportunity to explore new business concepts inspired by employees’ exposure to new ideas and thinking.

These innovative approaches are many and diverse and can help companies inspire employees, address skill gaps and introduce new perspectives on an ongoing basis:

Peer networks.

Executives develop relationships and share benchmarks, best practices and advice in facilitated groups that meet in person several times a year and virtually as needed. (Chief Executive offers The Chief Executive Network and Senior Executive Network, whose members are placed in industry-specific, revenue-compatible, non-competing groups facilitated by trained experts to share innovative ideas, solve specific problems and uncover best practices.)

Leadership training seminars.

In most urban centers, top experts come to town to host seminars that get people out of their seats to role-play mock business circumstances, like what to do when a competitor brings out a disruptive new product.

Cross-mentorship programs. 

These harness the learnings of older employees who mentor younger ones, with a quid pro quo of getting some needed technology refreshers in return.

Temporary assignments abroad.

Employees absorb the nuances of a geographic market’s unique culture, laws, regulations and business practices by spending time at a company office or facility in a foreign country.

Cross-discipline job opportunities.

By taking on diverse positions across an enterprise for a short period of time, employees really learn the ropes, helping them become future leaders.

Open-door gatherings. 

People from up and down the corporate rungs come together as equals to share their experiences and ideas. Game-changing innovations are not confined to senior leadership ranks. The goal is to inspire all employees to bring their most brilliant musings to work.

“Education comes in many different ways,” says Therese Tucker, founder and CEO of BlackLine, a Los Angeles-based publicly traded provider of financial and accounting automation software. “We’re doing the traditional things like supporting the continuing education credits of the many CPAs we have here on staff but are also trying out other ideas—with great success.”

So is Russell Johnston, CEO of QBE North America, part of Australia’s QBE Insurance Group, one of the world’s top 20 general insurance companies. “We have a philosophy here of putting the employee at the center of all we do, even ahead of our customers and shareholders,” Johnston says. “We put a lot of effort into internal programs focused on leadership. Not only am I very passionate about leadership development, I’m a certified instructor on the subject.”

Other CEOs riding herd on novel education approaches include Gaurav Dhillon, co-founder and CEO of SnapLogic, a provider of data integration technology solutions; and Tom Wheelwright, who heads up WealthAbility, a Tempe, Arizona-based provider of wealth attainment strategies. Both are fans of conventional MBA programs and non–degree granting courses within business schools, but they also advocate for nontraditional continuous learning opportunities.

Cross-Generational Collaboration

At SnapLogic, CEO Gaurav Dhillon is a big believer in the value of cross-mentorship, where older and younger employees advise one another on their respective areas of expertise. These days, there’s plenty for Millennials and Generation Z to teach.

“Generationally, younger people are more adept in their use of technology and multi-tasking; it just goes with the territory,” says Dhillon. “They can text and listen to you at the same time. Older people in the workforce, on the other hand, have patience and prioritization capabilities—judgment that can only arrive after years of experience, including bad experiences. There’s great value in cross-mentorship programs in which each generation’s unique abilities come together.”

Wheelwright at WealthAbility also values employee education programs focused on ways that encourage more enjoyable and effective collaborations, as opposed to filling skills gaps on an employee-by-employee basis. “I’m a big believer in seminars where everyone participates as a member of a team,” he says.

Each year, the firm sponsors a two-and-one-half-day leadership seminar focused on personal development. “I remember a new hire here who dreaded having to go to the seminar, thinking it would be another lecture with flip charts,” says Wheelwright. “She was in horror at the thought of having to spend eight hours stuck in a room. However, once it was over, she said she couldn’t wait for tomorrow.”

Wheelwright leads the annual seminar, which involves role playing and other collaborative exercises drawn from Robert Kiyosaki, founder of the Rich Dad Co., a private financial education organization. Kiyosaki is the author of the Rich Dad Poor Dad series of personal finance books, which have sold more than 27 million copies worldwide. In 1985, he acquired Erhard Seminars Training and rebranded the controversial company as a business education firm focused on leadership. He and Wheelwright have collaborated on a book and are close friends.

“Robert’s seminars are not the usual ‘rah rah’ Tony Robbins stuff, not that there is anything wrong with that,” says Wheelwright. “Instead, his seminars are more transformational than instructional, more geared to getting people to engage with others in creative expression. We use his role-playing techniques to get employees to open up and freely express their wildest business ideas. Research indicates that people who participate in such exercises retain up to 90 percent of the experience after two weeks. This compares to 50 percent when engaging in discussions and 20 percent in instructional, lecture-type classes.”

While he encourages the workforce to pursue outside professional education opportunities and the attainment of an MBA, not all academic institutions provide worthwhile programs, Wheelwright says. “Since we’re in the wealth education and accounting business, we employ several CPAs who have to take 80 hours of continuing education classes every two years, which we fund in full,” he notes. “But nobody enjoys sitting in a room taking copious notes as some ‘talking-head’ instructor lectures on some esoteric subject, with a 50-slide PowerPoint presentation of bullet points in the background.”

Wheelwright also advocates the “personal replenishment” benefits afforded by sabbaticals. The company provides a paid, one-month sabbatical to employees, which can be added to their vacation time to “really unwind,” he adds, calling sabbaticals a “crucial way to achieve better work-life balance. Employees who remain here while their colleagues are on a sabbatical are told not to bother them unless the issue is critical. We don’t want them to feel stress while they’re away. We want them to rest their minds, absorb new experiences and come back with fresh perspectives.”

Finding Stellar Seminars

Johnston at QBE North America is a proponent of seminar training—if they’re the right fit. Prior to becoming the insurer’s CEO in May 2016, he was president of the $6 billion U.S. casualty division of the large international insurer AIG, a division he previously served as COO.

During his time with AIG, he took a series of leadership seminars taught by Professors Jack Weber and Carol Weber (they’re married) at the University of Virginia’s Darden Graduate School of Business. “It was the most impactful course I’ve taken in my entire career,” he says.

Both The Wall Street Journal and the Financial Times rated the leadership seminars number one for teaching quality and impact among university-based leadership development programs. Not only did the seminars give Johnston keen insights into people performance management, they inspired him to become a seminar instructor. “I’ve got a certificate hiding around here somewhere,” he says.

Johnston prefers seminars that take employees out of their comfort zones rather than those designed purely to increase technical competencies. “I have nothing against executive MBA programs, but they tend to dwell on business-as-usual instruction as opposed to looking outside one’s technical competency to grasp what is really going on in the world, in terms of innovative technologies, processes and new business structures,” he explains. “My philosophy of executive development is to hire technically competent people and then get them thinking about change management. I’m a passionate believer that to become an effective leader in today’s disruptive market environment, you need to develop people equipped to handle change and innovate.”

At QBE, this process begins the day a new hire attends the onboarding orientation. “The minute they come through the front door, we put them through a pretty rigorous leadership profile, in which we assess their leadership competency and cultural fit,” he explains. “This tells us where they need to improve and how we should go about it, through seminars for the most part.”

To spread institutional knowledge and stimulate the development of novel ideas, Johnston hosts a monthly morning coffee klatch with a random selection of employees from across the business, everyone from senior executives to administrative assistants. He asks the same question at every gathering—“What can we do to make the company better?”

“Since I’ve arrived here, I’d say half the things we’ve implemented literally came from the ideas that percolated at those morning sessions,” Johnston says.

Ultimately, it’s that idea-generating power that justifies devoting time and resources to executive education. Continuous learning can be a powerful tool in any CEO’s arsenal, particularly when grappling with the single biggest challenge facing leaders today—the need to reinvent their businesses on an ongoing basis.

Russ Banham is a Pulitzer-nominated financial journalist and a contributing writer to Chief Executive.

Three M&A Alternatives For Accelerating Your Digital Transformation

By Russ Banham

Chief Executive

Apprehensive over inferior mobile and online platforms, inefficient processes and frightful visions of becoming irrelevant, both tech and non-tech organizations are looking to leapfrog the innovation curve by acquiring proficiency in AI, robotics, predictive analytics, machine learning, blockchain, natural language processing, image recognition software and the Internet of Things (IoT).

“You can look across literally every industry out there today and see both old-guard companies and new age businesses acquiring tech startups, from auto companies buying autonomous vehicle software providers to retailers buying e-commerce portals,” says Jason Flegel, partner and leader of Deloitte’s technology, media and telecom M&A practice. “Unequivocally, this is a major M&A trend with real lasting power.”

A recent survey conducted by Chief Executive with Tata Consultancy Services found 72 percent of CEO respondents considering either a merger, acquisition or divestiture in the next three years. Top drivers include new products or services, new markets and new business models. Since 2010, more than 21,800 tech startup exits—or points at which investors like VCs sell their stakes in a firm—have been tracked worldwide, representing a total deal value of about $1.2 trillion.

“Large technology companies have acquired smaller tech companies for years, but what is really eye-opening is the extraordinary number of non-tech companies closing deals,” says Marc Suidan, leader of PwC’s technology, media and telecom deals practice.

The result? Sky-high valuations, of course. Elvir Causevic, managing director and co-head of global investment bank Houlihan Lokey’s Tech+IP advisory practice, says some of the leading tech startups that come to the firm to represent their sale “are seeing very high valuations in the market relative to historical multiples and expect the same for their own deals, especially if they have the best-in-class technology.”

He’s not kidding. 3-D printer maker Desktop Metal, for instance, took 1.79 years from founding to reach a $1 billion valuation, whereas autonomous driving startup Zoox took two and a half years. Electronic scooter maker Bird jumped from a $400 million valuation to $1.2 billion in under three months.

Further clouding the picture is the tightening debt market. Cheap debt and ample private equity fueled the run in M&A. With rates rising and recent instability in the equity markets, the window for funding is narrowing and not every midsize buyer will be able to buy what it wants.

Luckily, there are other ways to round out your transformations. “Assuming the board and senior management identify what they need and why, they can generally get it in a joint venture or partnership, a licensing deal, or by poaching technology skill sets,” says Robert Hartwig, a professor of finance at the University of South Carolina’s Darla Moore School of Business. “M&A is not the only path toward digital and data transformation.”

Here are three examples of such alternatives, which serve as a prudent counterbalance to risky M&A transactions.

A License to License

Like many midsized companies, W.L. Gore & Associates must compete against larger business entities to acquire innovative startups. The 60-year-old global materials science company is circumventing this obstacle by licensing the patent for a groundbreaking technology.

Gore is primarily known for its GORE-TEX waterproof, breathable fabric membrane, but it also creates medical devices and products for the aerospace, pharmaceutical and mobile electronics industries. While it pursues traditional acquisitions, joint ventures and venture capital investments as part of its external growth strategy, the company also engages in novel patent licensing deals.

“Startups in the medical device space are very high-priced right now relative to historical multiples,” says Paul Fischer, who leads Gore’s corporate development organization. “We’re challenged in acquiring these organizations by larger publicly traded companies that can use their equity to purchase these companies. We need to deploy other creative strategies.”

By licensing an inventor’s patent on a breakthrough technology, Gore avoids the typical pitfalls of an acquisition, chief among them the cultural integration issues. The license also can be exploited to serve different needs. “We can use it to make things, but we can also use it to prevent others from making things,” Fischer says.

With regard to this defensive use, he explains that by owning the patent for a period of time, Gore can block a competitor from moving forward with a new product using the particular technology in its market space, or thwart the aims of a potential market entrant planning to do the same thing. “Patent licensing has become a large piece of our overall growth strategy,” says Fischer.

The drawbacks are relatively minor. While most patents can be licensed on a 20-year basis, some may require specific use restrictions. “Most times you can license it for whatever you want to use it for, but not always,” says Fischer. Another issue is whether the license provides exclusive use or non-exclusive use of the patent. In the latter case, other companies can also license the patent, limiting its effectiveness as a market barrier. “Universities that develop technologies that accept government research funding can’t provide exclusive licenses,” Fischer notes.

Still, there is more for CEOs to gain than lose from considering patent licensing as a means toward digital and data transformation. While a patent creates an effective market barrier, no company should license a patent it doesn’t expect to actually use.

“Thomas Jefferson opined that inventors of a patent should be incentivized for the betterment of society,” Fischer says. “If you keep it in the drawer, you’re squandering its value.”

Invest, Learn and Buy (Maybe)

Long disparaged as technology laggards, insurance companies are transforming operations from top to bottom using a variety of innovative digital and data technologies. In cases where these tools are not built in-house, they’re obtained from the more than 1,500 insurance technology startups that have sprouted like mushrooms over the past 10 years.

These nimble InsurTech startups seek to either compete against traditional insurers, sell to them, or be acquired by them. For insurers that opt to buy a startup, one way to lessen the risk of a failed deal is to invest in the company first, by way of forging a close partnership. This is the strategy of American Family Insurance (AmFam), a large, 90-year-old mutual insurance company providing property, casualty, health and life insurance products.

In 2014, AmFam launched a separate venture capital fund called American Family Ventures to scout promising investments in early-stage startups. Armed initially with a $50 million war chest, the fund eyeballs InsurTech startups from the seed capital phase through the Series B round, the point where investors take bigger stakes right before the company starts to scale.

“Our approach is to invest in these companies to build a strong relationship, give them some work and, if the time is right, at some point to consider an acquisition,” says Dan Reed, managing director of American Family Ventures. “If we get to this stage, it’s just a much easier conversation to have since we already know them well, and vice versa.”

While most of the fund’s investments don’t culminate in an acquisition, they nonetheless serve the purpose of providing access to innovative technologies that can be used in underwriting, claims management or other parts of the insurance value chain. A case in point is the fund’s investment in Hover, a startup with a machine learning application that homeowners use to assess damage to their property.

Once a homeowner clicks on the Hover app, they’re asked to take pictures of the damage. The tool then stitches these images together into a highly accurate 3-D model that includes precise measurements for claims purposes. “We invested $4 million in the company, giving us a head start for our own needs and theirs,” says Reed.

In scrutinizing startups, the fund looks for insurance innovators in product distribution, predictive analytics and the Internet of Things (such as developers of home automation and autonomous vehicle systems). Most investments result in the fund receiving 5 percent to 10 percent of equity in the startup.

Some investments turn into acquisitions. In 2013, the fund invested $5 million in Networked Insights, a startup that ingests social media posts on a massive scale. Using natural language processing, a form of machine learning, its tool provides unique insights into customer experiences for marketing purposes.

“After we made the investment, we became one of their largest customers; I spent time on their board and helped them raise more money,” says Reed. “As they continued to build out their capabilities, they became a more meaningful partner to us.”

AmFam acquired the company last year; it is now part of the insurer’s new digital transformation division.

Investments have been made in 54 startups since the fund’s launch. “Ten of these companies have been sold to the likes of Google, Amazon and Apple, providing good returns,” says Reed.

As for the remainder of its investments, the returns are measured in enhanced efficiencies and customer-focused innovations. Says Reed, “The ideas we’re getting from these companies are worth more than what we’ve invested.”

Locking Up a Joint Venture

Accuform, a developer and distributor of workplace safety signs, tags and labels, has acquired companies in the past, but when CEO Wayne Johnson wanted to develop a high-tech new product, he opted to take a different path.

Accuform already offered lockout/tagout systems that ensure dangerous machines are properly shut off before employees service and maintain the equipment. Johnson’s idea was to create a digital locking system that would provide maintenance teams with real-time data over the Internet about the status of equipment repairs. “Electronic locks have been around for some time now, but nobody had designed one specifically for the safety applications of lockout/tagout,” he says.

He learned this fact by scouring the Internet for evidence of a small startup tech company actually making such locks, thinking of a possible acquisition. None surfaced. Lacking the tech resources to design the product in-house, he spoke with a friend who had recently sold his startup technology business and was looking for another venture to sink his teeth into. “I told him my idea and he was intrigued,” Johnson says. “He understood both safety and technology and happened to know a design group in Taiwan that could develop the Internet-enabled software inside the lock.”

The two men met with representatives of the Taiwanese company. This past summer the three parties inked a joint venture to make the novel electronic lockout/tagout system, which Accuform would brand and market. The Taiwan-based company is a silent partner in the deal. “We put together an MOU (memorandum of understanding) where we’d share in the design and development costs and split the eventual revenues,” explains Johnson.

The partners established a one-year deadline to bring the product to market in summer 2019. The Taiwan-based technology provider is in charge of design and development and will manufacture the product in Taiwan and ship it to Accuform in Brooksville, Florida. The other two partners are in charge of sales and marketing in North America and globally.

The design is now completed, and a prototype is ready for testing. Once the tests conclude, Accuform will reach out to the Occupational Safety and Health Administration (OSHA) for regulatory approval, since the agency’s current standards don’t address electronic lockout/tagout systems. “The system is designed for user access via an electronic key as opposed to a traditional metal key that goes into a padlock,” Johnson explains. “It works just fine, and we have no qualms. We just need OSHA’s buyoff, which may involve some additional tinkering.”

From Johnson’s perspective, the arrangement is far more palatable than buying a company outright and attempting the messy business of integration. “From a risk standpoint, [acquisitions] are a much bigger bet than simply partnering up,” says Johnson. “A JV is much simpler and less risky.”

Russ Banham is a Pulitzer-nominated financial journalist and best-selling author.

Deep Learning Holds Promise for Much Longer Lives

By Russ Banham


The longest living human on record was Jeanne Calment, a French woman who died in 1997 at the age of 122. That’s a pretty sizable leap from the average global life expectancy at birth in 2018 — 70 for males and 74 for females. But it’s peanuts compared to what may lie ahead.

Today, the possibility exists for people to achieve what scientists like Alex Zhavoronkov call “extreme longevity,” an age where people who pop off at 100 are mourned for dying too young.

We can thank (or blame) artificial intelligence for such long lifespans. As Zhavoronkov writes in “The Ageless Generation,” his new best-selling book, “In the not-too-distant future, medical science will possess the technology to slow and even reverse the aging process itself.”

Into the Deep

Zhavoronkov, who is the founder and CEO of Insilico Medicine, an AI and bioinformatics company focused exclusively on aging and age-related diseases, is keen to share the ways people can use emerging technologies to squeeze out quite a few extra years.

“By leveraging deep neural networks, we’re able to discover novel molecules targeting specific diseases and develop them into compounds for life-extending drugs,” he said. All of this, “at a vastly faster rate than is achievable in the current clinical trials process.”

A deep neural network is a form of deep learning, a subset of AI, that employs a multilayered system of artificial neurons to glean insights from massive amounts of data. The difference between artificial neurons and the ones we carry around in our brains is the skyrocketing speed by which the former acquire knowledge.

The data in this context relates to the aging process and the ability of different molecules to prevent or halt disease. By training a deep neural network to predict a particular person’s biological age from genetic, environmental, and other relevant data, researchers can discern the specific reasons why that individual will likely live to 80, but not much longer. Once those reasons are understood, the same tools are used to ferret out “correctives,” Zhavoronkov said.
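
For the technically curious, the core idea can be sketched in a few lines of Python. The sketch is purely illustrative: the features, network size, and training data below are invented stand-ins, not Insilico Medicine’s actual models, which train far larger networks on real omics datasets.

```python
import numpy as np

# Illustrative only: a tiny feedforward network trained to predict a
# "biological age" from synthetic features standing in for genetic and
# environmental data. All sizes and numbers here are hypothetical.

rng = np.random.default_rng(0)
n, d = 200, 16                                      # 200 people, 16 features
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = 60 + X @ true_w + rng.normal(scale=2.0, size=n)  # target "biological age"

# One hidden layer of ReLU units, squared-error loss, plain gradient descent
W1 = rng.normal(scale=0.1, size=(d, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=32);      b2 = 0.0

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)    # ReLU hidden layer
    return h, h @ W2 + b2               # hidden activations, predicted age

def mse(pred):
    return np.mean((pred - y) ** 2)

lr = 1e-3
loss_before = mse(forward(X)[1])
for _ in range(500):
    h, pred = forward(X)
    g = 2 * (pred - y) / n              # gradient of loss w.r.t. predictions
    W2 -= lr * h.T @ g; b2 -= lr * g.sum()
    gh = np.outer(g, W2) * (h > 0)      # backpropagate through the ReLU
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)
loss_after = mse(forward(X)[1])
print(loss_before, loss_after)          # expect the training error to fall
```

The same loop, stacking layers, measuring prediction error and nudging the weights, is what scales up to the production systems Zhavoronkov describes.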

These correctives are unique molecular compounds that offer a powerful opportunity to inhibit the manifestation of terminal diseases like cancer, Parkinson’s, or Alzheimer’s. By nipping such diseases in the bud, lifespans are extended. “Ideally, we want to prevent disease before it manifests,” Zhavoronkov said. “Several cancers have been cured in animals genetically similar to humans, substantially increasing the lifespans of worms, fruit flies, and mice. The problem is these compounds don’t translate well to humans.”

Deep neural networks narrow these odds. “We use massive repositories of what are called the ‘omics’ data, for genomics, proteomics, transcriptomics and metabolomics data,” he explained. “These biological molecules translate the structure, function, and dynamics of an organism, what we refer to as ‘gene expression’ — a snapshot of all the genes.”

By training the deep neural networks to recognize the underlying signaling pathways in a disease, Zhavoronkov said, researchers can quickly identify the relevant biomarkers. From there, they can generate new molecular compounds with the properties that combat disease.

To get a sense of what he means by a “massive” repository of omics data, the total number of small molecules alone is estimated to be between 10⁶⁰ and 10²⁰⁰. For humans to study and test that many molecules as potential drug compounds would take an eternity — the equivalent of looking for a needle in a giant haystack. But AI, in this case deep learning, is very good at finding the needles.
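
In practice, this needle-finding looks like virtual screening: a trained model scores an enormous pool of candidate molecules so that only the handful of top scorers move on to synthesis and lab testing. The sketch below is hedged accordingly; the fingerprint size, candidate pool, and stand-in scoring weights are all invented for illustration.

```python
import numpy as np

# Illustrative virtual-screening sketch. A real pipeline would encode
# molecules as chemical fingerprints and score them with a trained deep
# model; here a random linear scorer stands in for that model.

rng = np.random.default_rng(1)
pool = rng.integers(0, 2, size=(100_000, 64))   # 100,000 binary "fingerprints"
weights = rng.normal(size=64)                   # stand-in for trained model weights

scores = pool @ weights                          # predicted activity per molecule
top_k = np.argsort(scores)[-10:][::-1]           # 10 most promising candidates
print(top_k)
```

The point is the funnel shape: the model evaluates the whole pool in milliseconds, and only the top-ranked candidates consume expensive lab time.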

“By mining the biological data, we hope to progress pretty quickly toward developing a chemical compound to hit the target and prevent the disease from manifesting,” he said. “Regenerative medicine is in a far more advanced state than most people realize. The pieces are coming together.”

Scratching the Surface

Although Zhavoronkov declined to estimate how long someone born today can expect to live, he’s confident we’ve not yet reached our biological lifespan limit.

A 2018 study published in the journal Science suggests that a fixed limit to the human lifespan has yet to be ascertained. The study found that beyond age 105, the chance of dying within a year levels off at roughly 50 percent, with an expected remaining lifespan of about 1.5 years. In other words, the risk of death at extreme ages seems to plateau, giving hope that we’ve yet to reach an expiration date.
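
As a back-of-the-envelope check (my illustration, not a calculation from the study itself), a constant 50 percent annual chance of dying corresponds, under a simple constant-hazard lifetime model, to a remaining life expectancy of about 1.4 years, in the neighborhood of the 1.5 years cited:

```python
import math

# If the annual probability of dying plateaus at 50 percent, model the
# remaining lifetime as exponential with a constant hazard rate.
annual_death_prob = 0.5
hazard = -math.log(1 - annual_death_prob)   # implied instantaneous hazard
remaining_years = 1 / hazard                # mean of the exponential lifetime
print(round(remaining_years, 2))            # ≈ 1.44 years
```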

Some scientists like Aubrey De Grey, chief science officer at the SENS Research Foundation, project that human lifespans will extend into the hundreds and possibly in excess of 1,000 years. The thinking is that by the time someone born today makes it to 100 years, the state of AI and medical science will have advanced to the point where the person can conceivably live to the age of 150. Fifty years later, the same progression occurs, lifting the individual’s chances of making it to 200. And so on.

“I don’t like to speculate on how long we can hope to live, but I will say that human beings have a way of setting seemingly impossible goals like landing on the moon and then pulling it off,” Zhavoronkov said. “In that regard, I would aspire to live young as long as possible.”

A Dry Well?

Living to a ripe old age is not without complications. For one thing, the world could become as crowded as a Los Angeles highway at rush hour. Long-term care is another dilemma. In the past, it was common for children to take care of their aging parents. However, that was when the children were in their 50s and 60s and their parents were in their 80s and 90s — not 120-plus.

Another sobering consideration is how society will support masses of people living only half their lifespan by the time they retire. An analysis by the International Monetary Fund indicated that if people live a mere three years longer than expected, pension-related costs in both advanced and emerging economies could increase by 50 percent.

“Science is allowing us to live longer, but not necessarily with an improved quality of life,” said Robert Hartwig, a professor of finance at the University of South Carolina’s Darla Moore School of Business.

In Hartwig’s class on health and life insurance, he describes longevity risk as one of the greatest global challenges yet to be addressed in meaningful ways. “If you look at the age people generally retire today through the lens of their savings, pension, and Social Security income, and consider the possibility they may live decades and decades longer, you realize the well will soon be dry,” he said. “There are simply not enough economic resources available on the planet to allocate to hordes of people living to 120 or 150.”

Hartwig concludes the class by asking his Gen Z students when they expect to retire; the majority respond around the age of 55. “Working much longer than one anticipates is needed to offset a disaster in the making,” Hartwig warned.

But is it fair to expect a construction worker, truck driver, or other laborer to work well into their 80s and 90s? “Society needs to create job opportunities for individuals who historically would have transitioned out of the workplace and into retirement,” said Hartwig. “This could entail retirement at current expectations, followed by a period of reeducation and a less toilsome occupation.”

To incentivize people to work beyond their retirement years, Zhavoronkov suggested changing the age of eligibility to receive pensions and unpenalized access to 401(k) savings and Social Security benefits. “Although people will object,” he said, “in time, the new age of retirement will become accepted as the norm.”

In other words, the time to prepare for a very long life is now.

Virtual House Calls

By Russ Banham

Leader’s Edge

Time is of the essence in the agricultural industry, given the short window of opportunity to harvest fruits and vegetables in their peak condition. When workers feel under the weather from a cold or stomach virus, they typically have to drive a long distance to a medical clinic or hospital emergency room for treatment. Time is spent waiting in the facility. An entire day’s work can be squandered.

United Agricultural Benefits Trust (known as UnitedAg) is familiar with the productivity pains caused by an employee’s lost time from work. The trust’s health plan, composed of more than 700 agricultural employer groups with 42,000 members spread across California’s vast agricultural industry, sought a way for workers to receive more expeditious care at the same or higher quality and at lower employer cost.

Telemedicine (also called telehealth) was the solution.

“Our members didn’t have good access to healthcare in the rural environments where they worked,” says Christopher McDonald, UnitedAg’s chief innovation officer. “They also tend to carpool to work. This meant someone feeling ill might not have a car to drive to a clinic. So they end up taking a full day off for a medical condition that could easily be addressed with a simple prescription.”

UnitedAg signed a contract with Teladoc, one of the top telemedicine providers in the fast-growing virtual healthcare space. Telemedicine is on-demand healthcare provided remotely by a doctor, nurse practitioner, registered nurse or other medical specialist. Access is within minutes. Employees with a bad headache, bladder infection or more than 50 other low-acuity (read: non-life-threatening) illnesses log onto an application and communicate their concerns to a medical specialist, who prescribes treatment.

Depending on the telemedicine provider, the back-and-forth online consultation may involve video (like Skype), a phone conversation with uploaded photos, a question-and-answer written exchange—or all of the above.

Not only is this fast-track process more humane for the injured or ill employee, it may sharply reduce the cost of employer-provided healthcare.

“By deferring a visit to an emergency room or a walk-in clinic, hundreds of dollars are shaved off each time,” says Peter McClennen, Teladoc’s president.

Add up those deferred visits, and companies with a large employee population can save a bundle. Other employer benefits include reduced absenteeism and a related uptick in productivity.

“Telemedicine also makes employees feel their employer values them, which increases their engagement levels, improving job retention,” says Aamir Rehman, M.D. and head of clinical services for the United States at employee benefits provider Mercer.

So it’s small wonder that virtual healthcare is taking off. According to Mercer, 71% of employers with 500 or more employees offered telemedicine services in 2017, up sharply from the 59% that offered the services in 2016. “It’s just exploding,” Rehman says.

Spiraling out of Control

Today’s healthcare system is in disarray, with competing agendas in Congress and no clear consensus on an optimal solution. “The volume of research papers today on healthcare is outsized—literally hundreds of papers published per day,” McClennen says.

Meanwhile, the average total health benefit expense per employee keeps creeping up for employers—from 2.4% of revenues in 2016 to 2.6% in 2017. Deductibles in traditional preferred provider organization plans also continue to rise, reaching nearly $1,000 on average in 2017 for employers with 500 or more employees and nearly $2,000 for companies with 10 to 499 employees.

Other examples of rising healthcare costs include:

  • Americans pay $858 on average for their prescriptions, compared to $400 per person across 19 other industrialized nations.
  • Doctor-dispensed drugs cost 60% to 300% more than medicines distributed at retail pharmacies.
  • Average annual salaries for nearly all physician specialties increased between 11% and 21% in 2016.
  • The cost of emergency room visits can reach well into the thousands of dollars.
  • Many ER visits are unnecessary, the medical condition easily treated with over-the-counter medications or a visit to a less expensive walk-in clinic.
  • Nearly half (46%) of physicians mandated by law to digitize patient records have spent more than $100,000 each to implement an electronic health record system.

These various expenses trickle down to affect the overall cost of healthcare for employers and everybody else. Telemedicine offers a way to trim the excess fat, while providing much-valued access and convenience to employees.

Tomorrow’s Healthcare Today

Think of telemedicine as a walk-in clinic without the walking. By all accounts, it appears to be the least expensive option to treat many low-acuity ailments such as bronchitis, athlete’s foot, deer tick bites, pink eye, laryngitis, and sinus, yeast and ear infections. That’s because employees don’t have to make a time-consuming trip to the ER, a walk-in clinic or a doctor’s office to treat such conditions.

“A key driver of telemedicine is to prevent overuse of the ER,” says Tim Smith, a principal at Deloitte, where he is the national leader for the consulting firm’s healthcare information technology practice. “If someone needs a prescription for penicillin because they have a rash, the person does not need to sit for three hours in an emergency room to be handed a piece of paper. With telemedicine, a doctor or nurse practitioner can immediately diagnose the rash and route the prescription to a local pharmacy for the person to pick up at lunch or on the way home from work.”

Telemedicine also puts injured or ill employees in the driver’s seat when it comes to their care. “Historically, if I wanted to see my doctor, I had to make an appointment when it was convenient for the doctor,” Rehman says. “With telemedicine, the doctor sees me at my convenience. For employees at work, this is a great alternative. They don’t have to leave work, drive to the care provider, and wait around in a waiting room for who knows how long. The physical barrier to providing care has been removed.”

Many telemedicine providers offer services beyond low-acuity medical conditions, such as providing dermatology and psychological care. Although the companies price their services differently, most charge a specific fee for a consultation with a medical specialist. UnitedAg, for instance, receives electronic data from Teladoc notifying it that one of its employee members consulted with the provider.

“The fee is well under what a regular doctor’s office or clinic charges,” says McDonald. “We also paid a one-time fee to set up the exchange between their system and ours.” He preferred to keep these amounts proprietary, noting they were negotiated with Teladoc.

The big question about telemedicine is whether the quality of care is on par with, better than, or worse than seeing a physician in person. Rehman seems to lean toward “on par.”

“As a doctor, when a patient comes to me with a sore throat, I examine the person to see if there might be something else going on,” he explains. “This might indicate that a physical visit is superior to a virtual one. But we’ve surveyed our clients’ employees about this, and the reality is their doctors spend very little time with them in the examination room. It was painful for me as a physician to read these responses.”

He adds, “The reality is that with telemedicine, patients aren’t giving up much, since their doctors tend to give them so little time anyway.”

Telemedicine, in fact, may be a better alternative to walk-in clinics.

“The quality of care in telemedicine outpaces brick-and-mortar clinics because everything is documented,” Rehman says. “If the patient is prescribed a medication, that person’s personal physician and healthcare provider receive this information electronically. Not all walk-in clinics have this capability.”

That gap can have consequences. Rehman cited the example of a patient who receives a prescription from a nurse practitioner at a walk-in clinic at a dosage higher than the person’s physician would have recommended, given the patient’s other medical conditions and prescriptions.

“With telemedicine, the patient’s personal physician is alerted immediately to the new prescription, whereas this may fall through the cracks at a clinic,” Rehman says. “If there is a problem, it can be quickly discerned and solved.”

Several studies indicate virtual care has its plusses and minuses. A 2016 Rand Corporation study indicated the ease of telemedicine consultations actually resulted in overuse, increasing overall utilization of healthcare services. A 2013 study published in the Archives of Internal Medicine, comparing telemedicine with face-to-face examinations of patients with sinusitis and urinary tract infections, confirmed the traditional benefits of telemedicine—convenience, avoidance of travel time, and lower costs—but found that telemedicine providers had prescribed antibiotics at a higher rate for sinusitis than did other doctors. And the benefit of antibiotics for sinusitis is unclear.

One can argue this research is four years old—antiquated given today’s blistering pace of technological development. In the interim, video sharing, digital technology and data analytics software have improved markedly, possibly moderating the tendency to overprescribe.

A New Service Line of Business

Telemedicine appears to be a cost-effective and highly valued employee benefit for insurance brokers to present to commercial clients.

“We’re very bullish on this concept of delivering healthcare, as I am personally,” says Deloitte’s Smith. “The technology now exists for patients to have much more interactive conversations with quality caregivers using video and other visual tools. Ten years from now, sitting in a waiting room will be passé.”

Many brokers are already partnering with a telemedicine provider (or several) to offer the product to clients. Aon is a case in point.

“The future of healthcare will be driven by people taking ownership of their well-being, and telemedicine enables this type of behavior,” says Ted Cadmus, senior vice president and a local practice leader in Aon’s health and benefits practice. “Right now too many people go to the ER for things like a sinus infection or a cold, which eats up capital and human resources and does tremendous disservice to the individual…. Telemedicine fits beautifully in our fast-paced, mobile technology world.”

Teladoc’s McClennen agrees that brokers have a lot to gain from presenting telemedicine as an additional employee benefit.

“Undoubtedly, the early movers will have a leading edge, given the trend toward virtual care,” he says. “Eight years ago, a company like ours didn’t exist, but neither did Uber. The world is changing. We’re able to bring all the pieces involved in patient care to employees in an automated, mobile way, making access to care easier and more satisfying.”

Cadmus believes younger employees are most likely to pursue virtual interactions with care providers, but he predicts that, in time, every employee will do the same.

“Some older baby boomers who are used to face-to-face doctor visits might still prefer that form of interaction,” he says, “but as they retire, telemedicine and other forms of virtual healthcare, like remote monitoring of patients, will be the primary means for treating diverse medical conditions.”

By remote monitoring, Cadmus is referring to digital technologies that collect medical data from individuals remotely to interpret and monitor their heart rate, blood pressure, blood sugar and other personal health data. Like telemedicine, this component of virtual healthcare is predicated upon reducing visits and readmissions to an ER, clinic or doctor’s office, improving patient quality of life while containing costs across the continuum of care.

These savings can be substantial. Mercer’s study indicates a typical telemedicine consultation costs less than $50, whereas the average office visit costs about $125. And a 2017 study by the online journal Value in Health suggests telemedicine consultations at the University of California Davis saved patients nearly nine years of travel time, five million miles and $3 million in costs.

Another study by Accenture found 78% of consumers are interested in receiving virtual health services. A study by Deloitte came to a similar conclusion, finding 74% would use telemedicine services if they were available at work. Meanwhile, 70% of respondents said they were “comfortable” consulting with a medical specialist about their medical issues via text, email or video.

Employers are not deaf to this growing interest. About 90% of large employers said they would offer telemedicine as part of their employee health plans in 2017, according to a 2016 National Business Group on Health survey. Altogether, the virtual healthcare market is expected to reach $3.5 billion in revenues by 2020.

“Healthcare is fast becoming one of the most automated industries in the world, making care easier to access and less expensive to acquire,” says McClennen.

All this makes telemedicine an enticing opportunity for brokers. Clients can obtain the aforementioned benefits—increased employee engagement, higher workforce productivity, improved care quality and lower healthcare use—at a much lower cost.

“Depending on the health plan provided by the employer, telemedicine may be a free add-on,” Cadmus says.

Aon has brokered deals for multiple commercial clients involving telemedicine providers Teladoc and American Well. “Which provider we choose depends on the client’s healthcare plan,” says Cadmus. “American Well may be right for one client, whereas another telemedicine provider may be right for a different client. We’re not locked in to any one of them. We play the role of third-party expert for our clients, identifying the solution that’s best for their employee population.”

For this service, Aon receives a commission from the provider on the dollars of business placed, although the firm also has charged fees, depending on the arrangement.

“This isn’t about money anyway,” Cadmus maintains. “The motivating force for us is to clearly demonstrate (to clients) that we’re thinking ahead toward their best interests—always in front of the next technological curve. Right now telemedicine fits this bill.”

Banham is a Pulitzer Prize-nominated author and insurance journalist.