Measuring health is important for many reasons. It can help doctors and scientists understand the risk of medical problems and develop prevention strategies that can improve patient care. Monitoring health status can also help economists understand financial outcomes and help policymakers identify the likelihood of people needing caregiver assistance or retiring early, life events that can have implications for programs such as Social Security, Medicare, and Medicaid.
Further, measuring health is essential for assessing the return on U.S. health care spending, which is large – close to one-fifth of U.S. gross domestic product – and growing. In the United States, health is typically gauged through surveys that ask people to assess their own physical well-being. These self-assessments can help forecast life expectancy, functional ability, and whether a person may require medical care in the future. In some cases, however, a better measure of health than self-assessment is needed.
Enter the frailty index
In June 2019, the Atlanta Fed published a working paper cowritten by Karen Kopecky, a Federal Reserve Bank of Atlanta research economist and associate adviser. Kopecky and her coauthors discussed the frailty index, an alternative method of evaluating health. This measure, pioneered by researchers at Dalhousie University in Halifax, Nova Scotia, focuses on the total number of health ailments a person has and the nature of those problems.
Kopecky worked with researchers Roozbeh Hosseini, a visiting scholar at the Atlanta Fed who is also an assistant professor at the University of Georgia, and Kai Zhao, associate economics professor at the University of Connecticut, to create frailty indexes using three surveys of Americans that include a host of questions on various aspects of health conditions.
A key finding of the researchers’ work was that the proportion of individuals in the U.S. population in good health decreases faster as people age when well-being is measured with the frailty index rather than with individual self-assessments. “For this reason the frailty index is an especially good measure for studying how health evolves with age,” Kopecky said.
The architecture of the frailty index helps explain why it can be a better predictor of health during aging. The index combines information from a range of questions about an individual’s specific health ailments to provide a summary of the person’s overall well-being. Kopecky and her colleagues used 27 health variables to construct a frailty index for a sample of more than 18,500 Americans who responded to the Panel Study of Income Dynamics (PSID) from 2003 to 2015.
The survey includes questions on specific medical conditions and activities of daily living. The variables the researchers looked at include difficulty with activities such as eating, dressing, walking, managing money, and getting in and out of bed, as well as the presence of conditions including cancer, diabetes, heart attack, stroke, and loss of memory.
The researchers derived the index by adding the total number of variables an individual reported as ailments, then dividing that sum by the total number of variables observed for that person in the year. The index captured expected variation in health: frailty was higher in older age groups compared with younger ones. Further, the sample showed that increases in frailty over time were three times more common than decreases.
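To make that arithmetic concrete, here is a minimal sketch of the calculation in Python. The variable names and the sample respondent are hypothetical, not drawn from the PSID.

```python
# Illustrative sketch of the frailty-index calculation described above:
# count the ailments a respondent reports, then divide by the number of
# health variables actually observed for that person in the survey year.
# The 27 PSID variables are represented here by a small hypothetical subset.

def frailty_index(responses: dict) -> float:
    """responses maps each health variable to True (ailment present),
    False (absent), or None (not observed that year)."""
    observed = [v for v in responses.values() if v is not None]
    if not observed:
        raise ValueError("no health variables observed for this respondent")
    return sum(observed) / len(observed)

# Hypothetical respondent: reports difficulty dressing and diabetes;
# the stroke question was not observed that year.
respondent = {
    "difficulty_eating": False,
    "difficulty_dressing": True,
    "diabetes": True,
    "cancer": False,
    "stroke": None,
}
print(frailty_index(respondent))  # 2 ailments / 4 observed = 0.5
```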
Kopecky and her coauthors also compared health over time as measured by the frailty index against self-reported health status, using the percentage of respondents in the PSID survey who rated their health as “excellent,” “very good,” “good,” “fair,” or “poor.” Their analysis found that when health is measured by frailty, the proportion of individuals with excellent or very good health declines faster with age.
They set cutoff values for frailty based on the distribution of self-reported health of 25- to 29-year-olds. When the cutoff values and frailty were used to determine individuals’ health categories as opposed to self-reported health, the researchers observed that health deteriorated much more rapidly with age.
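A rough sketch of that cutoff procedure appears below, under the assumption that the cutoffs are chosen so the shares of 25- to 29-year-olds falling into each frailty-based category match that group’s self-reported shares. The shares and the simulated frailty distribution are illustrative placeholders, not the paper’s estimates.

```python
import numpy as np

# Illustrative: derive frailty cutoffs from the frailty distribution of
# 25- to 29-year-olds so that the share falling into each health category
# matches that age group's self-reported shares. The shares below are
# hypothetical placeholders, not the PSID estimates.
self_reported_shares = {  # ordered best to worst health (dicts keep order)
    "excellent": 0.30, "very good": 0.35, "good": 0.25, "fair": 0.07, "poor": 0.03,
}

def frailty_cutoffs(young_frailty: np.ndarray, shares: dict) -> dict:
    """Return the upper frailty bound for each health category."""
    cutoffs, cumulative = {}, 0.0
    for category, share in shares.items():
        cumulative += share
        cutoffs[category] = np.quantile(young_frailty, cumulative)
    return cutoffs

def classify(frailty: float, cutoffs: dict) -> str:
    for category, upper in cutoffs.items():
        if frailty <= upper:
            return category
    return "poor"  # above every cutoff estimated from the young sample

young = np.random.default_rng(0).beta(1.5, 20, size=1_000)  # toy frailty draws
cuts = frailty_cutoffs(young, self_reported_shares)
print(classify(0.25, cuts))  # classify an older respondent with frailty 0.25
```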
In other words, the fraction of people in poor health rose with age under both measures, but the rise was much faster when health was measured by frailty than by self-reports. For example, only 17 percent of people aged 70 to 74 had a frailty index low enough to fall into the “excellent” or “very good” health category. That compares with 39 percent of 70- to 74-year-olds who self-reported their health as “excellent” or “very good.”
“We interpret these patterns as evidence that self-reported health status underestimates the decline in observable health,” the paper says. The researchers also found that the frailty index was a better predictor than self-reported health of mortality and the probability that a person would enter a nursing home or become dependent on Social Security Disability Insurance.
Individuals’ self-assessments not always reliable
One reason frailty may be a better gauge of health than self-assessments has to do with the subjective nature of individuals’ judgments of their well-being, Kopecky said. “People tend to compare themselves to others their age” in self-reporting their health condition rather than considering how their present medical status compares with their past state, she said.
“People seem to be readjusting their self-reported health. So if you really want to map out how health evolves as people age, subjective measures don’t work well.” That isn’t to say that self-reported health information doesn’t have value. It can play a role in helping researchers understand the variation of health within an age group, Kopecky said. She added that self-reported data can also help uncover private medical information that a frailty index would not easily discern, such as hereditary conditions that may put individuals at risk for certain diseases.
Kopecky said the frailty index model holds much potential in economics. It can provide insight into such matters as the effect of health on a person’s earnings over time, a country’s labor supply, and individual consumption patterns. “It’s a step in the right direction in terms of improving our way of measuring health and as a result being able to understand how health interacts with economic variables and models,” she said.
There’s good news and weird news when it comes to age-friendly jobs in America. The good news, according to a recent research paper, “The Rise of Age-Friendly Jobs,” by three noted economists, is that between 1990 and 2020, roughly three-quarters of U.S. occupations increased their age-friendliness.
Specifically, employment in what these economists call “above-average age-friendly occupations” rose by 49 million over that 30-year period.
A Head-Scratching Finding
“I thought there’d be an increase in the number of age-friendly jobs, but I was staggered at how big the increase was,” says Andrew Scott, a London Business School economics professor who wrote the paper with MIT’s Daron Acemoglu and Nicolaj Søndergaard Mühlbach of the McKinsey consulting firm.
Now for the weird part: You’d expect that older workers — age 50 and older — would be the big beneficiaries of the rise in age-friendly jobs, but they aren’t. Of the 33.1 million increase in people employed in the most age-friendly occupations from 1990 to 2020, only 15.2 million were workers over 50.
The paper’s researchers found that age-friendly jobs have disproportionately gone to younger women and college graduates of all ages.
“Non-college grads, and particularly male non-grads, are losing out. They tend to work in the least age-friendly jobs,” says Scott, co-author of “The 100-Year Life” and “The New Long Life.”
What’s going on here?
Definition of ‘Age-Friendly’ Jobs
To answer that, it helps to know what the three economists say makes a job “age-friendly.”
To create their Age-Friendliness Index, they matched nine job attributes (schedule flexibility, telecommuting, physical demands, the pace of work, autonomy, paid time off, working in teams, job training and meaningful work) with data from the U.S. Department of Labor’s O*NET program, the nation’s primary source of occupational information.
The Index calculated the age-friendliness of jobs based on earlier studies showing that older workers generally prefer jobs with greater autonomy, more moderate physical activity, flexible schedules, shorter commutes and the opportunity to telecommute. In one study, older workers said they’d be willing to accept a 20% cut in pay in exchange for flexible work.
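As a loose illustration of how attribute data can be rolled into a single occupation-level index, the sketch below standardizes each of the nine attributes across occupations and averages them with equal weights. This is one common construction, not the authors’ actual methodology, and every score in it is invented.

```python
import numpy as np

# Toy construction of an age-friendliness index from the nine attributes
# named above: standardize each attribute across occupations (z-scores),
# then take an equal-weight average. All scores are invented; a real index
# would be built from O*NET occupational data.
ATTRIBUTES = [
    "schedule_flexibility", "telecommuting", "low_physical_demand",
    "relaxed_pace", "autonomy", "paid_time_off", "teamwork",
    "job_training", "meaningful_work",
]

# Rows are occupations, columns are attribute scores on a 0-1 scale.
scores = np.array([
    [0.9, 0.8, 0.9, 0.7, 0.8, 0.6, 0.5, 0.4, 0.7],  # HR manager (hypothetical)
    [0.2, 0.0, 0.1, 0.3, 0.4, 0.3, 0.6, 0.5, 0.5],  # construction laborer
    [0.7, 0.9, 0.8, 0.6, 0.7, 0.5, 0.4, 0.3, 0.4],  # reservations agent
])

z = (scores - scores.mean(axis=0)) / scores.std(axis=0)  # standardize columns
age_friendliness = z.mean(axis=1)                        # equal-weight average
print(dict(zip(["hr_manager", "laborer", "reservations"], age_friendliness)))
```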
You can probably see where this is all going.
Age-Friendly or Worker-Friendly?
Age-friendly jobs are actually worker-friendly jobs. The most age-friendly jobs in 2020, researchers found, included HR managers, insurance salespeople, receptionists and reservations agents.
Some “age-friendly” job factors are especially attractive to women. “Scheduling flexibility has, I think, a great benefit for people who are also the primary caregivers of their children or their older parents,” says Debra Sabatini Hennelly, founder and president of the workplace consulting firm Resiliti.
Hennelly, an advocate for older workers, recently co-wrote a Harvard Business Review article titled “Bridging Generational Divides in Your Workplace” with Bradley Schurman, author of “The Super Age.”
How to Be Age-Friendly
In the article, Hennelly and Schurman said businesses let too many older workers go during the pandemic due to “early retirements, a caustic mix of ageism and cost-cutting measures.”
One takeaway from Scott’s age-friendly-jobs paper: There’s a significant difference between age-friendly jobs and age-friendly employers.
Employers are increasingly offering age-friendly jobs to attract and keep workers of all ages. But being an age-friendly employer means being willing to hire, train and retain people over 50; to encourage a multigenerational workforce; and to avoid ageist policies and practices.
Scott was cheered by rapid growth in age-friendly occupations but told me he believes much more work is needed to cultivate age-friendly employers.
“We’ve seen an evolution in the space toward more age-friendly jobs, but not a great success in firms switching to become age-friendly employers,” he says.
Tips for Older Job Seekers
Given the growth of age-friendly jobs but the shortage of age-friendly employers, Scott and Hennelly shared two pieces of advice for job hunters over 50:
1. Research whether an employer or industry you’re considering truly is age-friendly, and whether it practices age discrimination in hiring and retention. “Is there an occupation that has less age discrimination?” says Scott.
Hennelly suggests a surprising way to unearth this kind of information: look at employers’ ESG reports online. ESG stands for Environmental, Social and Governance; these types of reports were formerly called corporate social responsibility or sustainability reports.
“The S in ESG is all about how you treat your workforce, right?” asks Hennelly. “There are ESG reports now that specifically call out age inclusivity.”
But, Hennelly notes, you’re more likely to find U.K. and European companies citing their age inclusivity in their ESG reports than U.S. companies. You can find these reports through an online search.
Also, review AARP’s list of 1,000+ employers that have taken its “age-friendly pledge” and the hundreds in the Age-Friendly Institute’s “Certified Age-Friendly Employer Program.”
2. When interviewing for a job, be proactive about the advantages you’d bring to the employer as an older worker. “The number one business argument I would make is that you can help address the employer’s turnover problem,” says Hennelly. Note that if you get the job, you plan to stay a long time.
“Mature workers are less likely to change jobs compared to younger workers,” according to a recent OECD report, “Retaining Talent at All Ages.”
Hennelly says hiring older workers with wisdom and experience lets younger workers learn from them and avoid mistakes the 50+ workers have seen — or even made — in the past.
If the hiring manager assumes you are “overqualified” or that you will demand too much pay, “head that off,” she advises. “Just say, ‘I’ll bet you’re wondering why someone with my level of expertise would be interested in this job.'”
Why You Might Work for Less
In their Harvard Business Review article, Hennelly and Schurman urge employers to “recognize and encourage younger managers who don’t write off older candidates as ‘overqualified’ or question why they would apply for a role that seems to be ‘beneath’ them.”
If you’re willing to work for less money than your last job — perhaps in exchange for less management responsibility — say so, advises Hennelly.
Hiring more older workers who are willing to accept less pay helps employers rein in labor costs, Scott says. In the big picture, he adds, reduced pressure to raise employee wages can lead to lower inflation nationally.
A new study in Nature Sustainability incorporates the damages that climate change does to healthy ecosystems into standard climate-economics models. The key finding in the study by Bernardo Bastien-Olvera and Frances Moore from the University of California at Davis:
The models have been underestimating the cost of climate damages to society by a factor of more than five. Their study concludes that the most cost-effective emissions pathway results in just 1.5 degrees Celsius (2.7 degrees Fahrenheit) additional global warming by 2100, consistent with the “aspirational” objective of the 2015 Paris Climate Agreement.
Models that combine climate science and economics, called “integrated assessment models” (IAMs), are critical tools in developing and implementing climate policies and regulations.
In 2010, an Obama administration governmental interagency working group used IAMs to establish the social cost of carbon – the first federal estimates of climate damage costs caused by carbon pollution. That number guides federal agencies required to consider the costs and benefits of proposed regulations.
Economic models of climate have long been criticized by those convinced they underestimate the costs of climate damages, in some cases to a degree that climate scientists consider absurd. Given the importance of the social cost of carbon to federal rulemaking, some critics have complained that the Trump EPA used what they see as creative accounting to slash the government’s estimate of the number. In one of his inauguration day Executive Orders, President Biden established a new Interagency Working Group to re-evaluate the social cost of all greenhouse gases.
Perhaps the most prominent IAM is the Dynamic Integrated Climate-Economy (DICE) model, whose creator, William Nordhaus, was awarded the 2018 Nobel Prize in Economic Sciences.
Judging by DICE, the economically optimal carbon emissions pathway – that is, the pathway considered most cost-effective – would lead to warming of more than 3°C (5.4°F) above pre-industrial temperatures by 2100 (under a 3% discount rate). The IPCC has reported that this level of further warming would likely result in severe consequences, including substantial species extinctions and very high risks of food supply instabilities.
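For readers who want the mechanics: the social cost of carbon in such models is, in essence, the discounted sum of the future damages caused by one additional ton of emissions. A stylized textbook formulation, not the exact DICE expression, is shown below, with r standing in for a discount rate such as the 3% mentioned above:

```latex
% Social cost of carbon at time t: the present value of the marginal
% damages D_s that one extra ton of CO2 emitted at time t causes in each
% future year s, discounted at rate r (e.g., r = 0.03).
\[
  \mathrm{SCC}_t \;=\; \sum_{s=t}^{\infty} \frac{1}{(1+r)^{\,s-t}}
  \, \frac{\partial D_s}{\partial E_t}
\]
```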
In their Nature Sustainability study, the UC Davis researchers find that when natural capital is incorporated into the models, the emissions pathway that yields the best outcome for the global economy is more consistent with the dangerous risks posed by continued global warming described in the published climate science literature.
Accounting for climate change’s degradation of natural capital
Natural capital includes elements of nature that produce value to people either directly or indirectly. “DICE models economic production as a function of generic capital and labor,” Moore explained via email. “If instead you think natural capital plays some distinct role in economic production, and that climate change will disproportionately affect natural capital, then the economic implications are much larger than if you just roll everything together and allow damage to affect output.”
Bastien-Olvera offered an analogy to explain the incorporation of natural capital into the models: “The standard approach looks at how climate change is damaging ‘the fruit of the tree’ (market goods); we are looking at how climate change is damaging the ‘tree’ itself (natural capital).” In an adaptation of DICE they call “GreenDICE,” the authors incorporated climate impacts on natural capital via three pathways:
The first pathway accounts for the direct influence of natural capital on market goods. Some industries like timber, agriculture, and fisheries are heavily dependent on natural capital, but all goods produced in the economy rely on these natural resources to some degree.
According to GreenDICE, this pathway alone more than doubles the model’s central estimate of the social cost of carbon in 2020 from $28 per ton in the standard DICE model to $72 per ton, and the new economically optimal pathway would have society limit global warming to 2.2°C (4°F) above pre-industrial temperatures by 2100.
The second pathway incorporates ecosystem services that don’t directly feed into market goods. Examples are the flood protection provided by a healthy mangrove forest, or the recreational benefits provided by natural places.
In the study, this second pathway nearly doubles the social cost of carbon once again, to $133 per ton in 2020, and it lowers the most cost-effective pathway to 1.8°C (3.2°F) by 2100. Finally, the third pathway includes non-use values, which incorporate the value people place on species or natural places, regardless of any good they produce. The most difficult to quantify, this pathway could be measured, for instance, by asking people how much they would be willing to pay to save one of these species from extinction.
In GreenDICE, non-use values increase the social cost of carbon to $160 per ton of carbon dioxide in 2020 (rising to about $300 in 2050 and $670 per ton in 2100) and limit global warming to about 1.5°C (2.7°F) by 2100 in the new economically optimal emissions pathway. (Note for economics wonks – the model runs used a 1.5% pure rate of time preference.)
Climate economics findings increasingly reinforce Paris targets
It may come as no surprise that destabilizing Earth’s climate would be a costly proposition, but key IAMs have suggested otherwise. Based on the new Nature Sustainability study, the models have been missing the substantial value of natural capital associated with healthy ecosystems that are being degraded by climate change.
Columbia University economist Noah Kaufman, not involved in the study, noted via email that as long as federal agencies use the social cost of carbon in IAMs for rulemaking cost-benefit analyses, efforts like GreenDICE are important to improving those estimates. According to Kaufman, many papers (including one he authored a decade ago) have tried to improve IAMs by following a similar recipe: “start with DICE => find an important problem => improve the methodology => produce a (usually much higher) social cost of carbon.”
For example, several other papers published in recent years, including one authored by Moore, have suggested that, because they neglect ways that climate change will slow economic growth, IAMs may also be significantly underestimating climate damage costs. Poorer countries – often located in already-hot climates near the equator, with economies relying most heavily on natural capital, and lacking resources to adapt to climate change – are the most vulnerable to its damages, despite their being the least responsible for the carbon pollution causing the climate crisis.
Another recent study in Nature Climate Change updated the climate science and economics assumptions in DICE and similarly concluded that the most cost-effective emissions pathway would limit global warming to less than 2°C (3.6°F) by 2100, without even including the value of natural capital. Asked about that paper, Bastien-Olvera noted, “In my view, the fact that these two studies get to similar policy conclusions using two very different approaches definitely indicates the urgency of cutting emissions.”
Wesleyan University economist Gary Yohe, also not involved in the study, agreed that the new Nature Sustainability study “supports growing calls for aggressive near-term mitigation.” Yohe said the paper “provides added support to the notion that climate risks to natural capital are important considerations, especially in calibrating the climate risk impacts of all sorts of regulations like CAFE standards.”
But Yohe said he believes that considering the risks to unique and threatened systems at higher temperatures makes a more persuasive case for climate policy than just attempting to assess their economic impacts. In a recent Nature Climate Change paper, Kaufman and colleagues similarly suggested that policymakers should select a net-zero emissions target informed by the best available science and economics, and then use models to set a carbon price that would achieve those goals.
Their study estimated that to reach net-zero carbon pollution by 2050, the U.S. should set a carbon price of about $50 per ton in 2025, rising to $100 per ton by 2030. However climate damages are evaluated, whether through a more complete economic accounting of adverse impacts or via risk-based assessments of physical threats to ecological and human systems, recent economics and climate science research findings consistently support more aggressive emissions-reduction efforts consistent with the Paris climate targets.
Whether you’re a remote work booster or a skeptic, there are lots of unanswered questions about what happens next for remote work, especially as Covid-19 restrictions continue to fade and as fears of a recession loom.
How many people are going to work remotely in the future, and will that change in an economic downturn? Will remote work affect their chances of promotion? What does it mean for where people live and the offices they used to work in? Does this have any effect on the majority of people who don’t get to work remotely? If employees don’t have to work in person to be effective, couldn’t their jobs be outsourced?
It turns out there’s a dangerous line between arguing for remote work and arguing yourself out of a job. And since remote work makes employees less visible, they will have to find other ways to let higher-ups know they exist or risk being passed over for pay raises. Remote work will also have long-lasting effects on the built environment, requiring office owners to renovate and allowing employees the potential for a higher quality of living. Finally, what happens during a recession largely depends on whether your company decides to save money by reducing real estate or laying off the employees they never met.
One thing that’s clear is that remote work is not going away. There are, however, a number of ways to make it better and more commonplace, and to ensure that it doesn’t harm you more than it helps.
To get a better idea of what could be coming, we asked some of the most informed remote work thinkers — people who study economics, human resources, and real estate — to make sense of what to expect in the future of remote work. Their answers, edited for length and clarity, are below.
Five years from now, what percentage of the US population will work remotely?
Johnny Taylor Jr., president and CEO of the Society for Human Resource Management: I think that number will never exceed 30 percent fully remote. What percentage will have some remote work? Probably 60 to 65 percent. There are some roles that will never be remote. But even in retail, employers are trying to figure out how to give that worker population some ability to work remotely. One retail company I talked with is going to make it so that the people who work in the store five days a week now do one day a week in customer service remotely.
Nicholas Bloom, economics professor at Stanford University, co-founder of WFH Research: Currently, 10 percent of the US workforce are fully remote and 35 percent are hybrid remote. In five years, I think both numbers will be pretty similar. Pushing this up is continued technological improvements in working-from-home technology. Pushing this down is the pandemic ebbing from memory.
Julie Whelan, global head of occupier research at Coldwell Banker Richard Ellis: The last few years have proven that people are able to work remotely. Now we are trying to combine in-person and remote work — that is where the challenges emerge. I am not convinced we will see a large jump in fully remote work; I think jobs that are fully remote will always remain the minority.
What has to change for more people to be able to work remotely?
Matthew Kahn, economics professor at the University of Southern California and author of Going Remote: How the Flexible Work Economy Can Improve Our Lives and Our Cities: Firms must have clear performance metrics — ideally ones that can be verified using quantitative data, so that remote workers understand in real time how they are performing. Firms must also figure out how to configure “virtual watercooler” interactions so that remote workers are less likely to feel like they are out of the loop.
Arpit Gupta, assistant professor of finance at New York University Stern School of Business: Companies need to have better ways to onboard new workers and get them involved in corporate culture. They also need to improve remote workers’ ability to connect with different parts of the organization and create better ways to manage new idea generation and creativity. Finally, they need to ensure improved promotion prospects for purely remote workers and the ability to go completely remotely from one firm to another.
Bloom: The main driver of working from home is whether it makes business sense for the organization, and if employees are happy doing this. This is driven by technology and the job task. Over time the technology is slowly improving to support working from home. I have been working on this topic for almost 20 years, and the changes over that period have been incredible.
Twenty years ago, working from home meant telephone calls and emailing or mailing small files. Now it’s all video calls and the cloud. Within 10 years, I predict new major technologies will arise to make this far better. In terms of job tasks, these are also changing to support working from home. For example, my neighbor is a doctor and pre-pandemic was in the office every day, but now sees patients remotely two days a week, as her job tasks now include televisits.
Taylor: We as management have to get comfortable with a total paradigm shift. We constantly say, “That can’t happen.” And the fact of the matter is we have to be willing to challenge our notions of what can’t happen and say, “Can it?” We’re in this dynamic stage where we’re determining whether or not it works. So the question, “Can you work remotely?” is really not the question. Is it possible? Yes, during the pandemic we proved that it’s possible. The question is, will there be trade-offs?
How might remote work affect jobs that aren’t remote?
Gupta: Changing consumption patterns will create more demand for goods and services — and the people who provide them — in the suburbs and remote-friendly destinations, relative to office central business districts in current metropolitan areas.
Bloom: Many non-remote jobs interact with remote workers. Think of retail and food service workers in city centers. If office employees move to remote work, these service workers have to change their location of work, too.
Taylor: More jobs might become partially remote. For a nurse, we’ll give them three days in the hospital and two days as a tele-nurse. So we are thinking about sharing responsibilities to get to hybrid, even in those roles that absolutely, at the end of the day, largely have to be in person.
Will remote workers find it harder to advance than their in-person colleagues?
Taylor: Yes, point-blank. More than two-thirds of supervisors (67 percent) consider remote workers more easily replaceable than onsite workers, and 62 percent believe fully remote work is detrimental to employees’ career objectives. Managers acknowledged that when they are looking to give an assignment, they oftentimes forget the remote worker. Proximity matters.
Something that is of particular importance to me as an African American is, for years, we argued that we weren’t able to build relationships with the majority community. We didn’t have access to them and therefore visibility. Well, you really lose access and visibility if you’re at home and they’re in the office.
I’ve heard this argument that office culture is a white male-dominated relic of the past. That might be. But as long as those white males are in the office making decisions about who is going to be promoted, then you are very likely putting yourself at a disadvantage. It’s not a question of, is that right or wrong, fair or not. It’s just what it is. Working remotely significantly reduces your opportunities to build relationships with people who can influence your career.
Whelan: There is a risk that those people who get more face time are naturally at an advantage to advance faster than others. However, if an organization truly supports flexible work, then behavior around promotions and compensation gains needs to be discussed early, observed closely, and action should be taken if desired outcomes are not met. Just because people may work remotely some of the time — or all of the time, depending on company policy — that doesn’t mean they cannot be visible. So it is incumbent on everyone, including the employee themselves, to make sure people remain visible, front-of-mind, and reviewed based on job performance despite a remote status.
Kahn: The answer to this key question hinges on whether a given firm promotes based on a type of nepotism or based on objective value added to the firm’s core goals. Face-to-face interaction does build up trust and friendship. If bosses play favorites, then the remote workers will have a disadvantage in getting promoted. Those bosses who seek to promote based on meritocratic criteria will emphasize the quality of face-to-face interactions over the quantity of face-to-face interaction at work. Such an emphasis on quality over quantity will alleviate concerns that remote workers are second-class citizens, as they may visit the headquarters just a few days a month.
Those firms that figure out these new work configurations will have an edge in attracting and retaining a more diverse workforce.
Bloom: Fully remote workers may find slow career progression, particularly those who are early in their careers. As individuals advance in their careers, however, personal mentoring becomes somewhat less important. It is also worth noting most remote workers in the US are not fully remote. They are mostly hybrid, coming into the office for three days a week on average, and as such, they get a good dose of personal interaction. So, yes, fully remote workers may face some career advancement costs, but hybrid workers likely will face little or no costs.
What’s going to happen to all the offices?
Whelan: Offices will still exist — they will just evolve. The most sought-after locations, the most desirable amenities, and the most productive space design will continue to morph as population migration and work patterns settle into a new place. The workplace today is anywhere you have a mobile device and an internet connection. But the physical office as a place to gather, innovate, and connect cannot easily be replaced.
Bloom: In the short run, not much. The reason is scheduling. Most firms are either letting employees choose their working-from-home days, which typically means Monday and Friday, or are scheduling teams or the whole firm to come in on the same days, often Tuesday, Wednesday, and Thursday. As such, they cannot cut space. Nobody sublets an office on Monday and Friday.
In the longer run, clever scheduling software, like Kadence, will organize teams and working groups to come in on different days: Say the industrials team is in the office on Monday and Tuesday, and the residential team on Wednesday and Thursday. But from talking to hundreds of firms, this is probably some years away from being a major reality. Until that time, office demand will be soft but won’t see major drops.
If you want to look for big impacts on real estate, then focus on city center retail. With office workers working from home about 50 percent of days, retail expenditure in central New York, San Francisco, and other big cities has collapsed, and that retail spending, jobs, and space is moving out to the suburbs.
Kahn: In high-quality-of-life cities, these commercial buildings will be converted into housing as well as schools and centers for our population’s aging senior citizens.
Taylor: There is no question that we’re going to have less demand for the traditional office space. Will it go away? No.
To what extent will remote work affect where people live?
Daryl Fairweather, chief economist at Redfin: Remote work is already affecting where people live. A record nearly one-third of homebuyers looked to relocate out of their home metro in the second quarter of 2022. That’s up from roughly 26 percent before the pandemic. Many people who have the flexibility to move have been doing so during the pandemic, often taking their higher housing budgets with them and, in turn, contributing to higher home prices in the places they’re moving. Nowhere is this more pronounced than in popular Sunbelt cities like Phoenix, Miami, and Austin, which have seen a surge of in-migration from more expensive coastal metros like NYC, San Francisco, and Seattle.
Taylor: We are absolutely seeing people move further away. Hell, I’ve even seen people who have to be in-office two days a week say, “Hey, I live in a totally different city, and I can commute in.” So I can live in Atlanta, work in Washington, DC, buy a plane ticket for those two days, get a hotel, and the math says it’s actually cheaper and better for me to live where I want to live and commute — even if the company doesn’t pay for it, because I don’t have to pay for housing in DC.
Kahn: In expensive superstar cities, working-from-home workers will be more likely to move to the suburban fringe, where land is cheaper and the homes are newer. Remote workers will also seek out beautiful areas that offer them the leisure opportunities they desire. Real estate prices in Santa Barbara, California, have boomed since March 2020 due to its beauty and its proximity to Los Angeles. Perhaps surprisingly, medium-size cities such as Baltimore will gain. Located along the Amtrak Corridor, Baltimore offers easy access to Washington, DC, New York City, and Philadelphia and features much lower housing prices.
How will it affect pay?
Fairweather: Some companies are localizing pay for their workers who relocate and work remotely, but plenty are letting remote workers keep their high salaries. The biggest winners will be coastal workers who move to more affordable places and maintain their salary. They’ll find their money goes much further, not just for housing but for other goods and services.
The biggest losers are people already living in popular migration destinations who may not have the option to move somewhere less expensive, and whose salaries may not go as far as they once did, thanks to both higher inflation and rising home prices in their area. However, some people living in popular migration destinations may be happy that their home values have increased and their local businesses have more high-earning customers.
Bloom: Working from home is a perk, so it means any individual firm offering hybrid-WFH can pay about 5 to 10 percent less. But, of course, there are also general equilibrium effects in that firms compete for talent in a labor market. If every firm offers working from home, no individual firm can cut pay without losing employees.
Will remote work cause companies to hire more contractors or more people outside the US?
Taylor: An employee came to me, and she made a really, really compelling case: “Johnny, I don’t need to come into the office.” She literally gave me a three-page memo making the case for why she could work remotely. And I smiled and said, “Be careful what you pray for. In the process of saying, ‘I don’t need to interact with other people, I’m an individual contributor,’ you’ve literally made the case that your job can be outsourced. And now I don’t have to cover your pension plan, I don’t have to deal with a salary increase every year, I don’t have to do any of that.” And guess what? I did exactly that. I outsourced that role.
Let’s face it, most of us could have a fully contracted environment, but what we want is a culture, people who have a long-term commitment. We want to build leadership; we need management. And we do that by having consistent relationships and getting to develop our people, so there’s a lot of upside to employing people internally and reasons that we don’t outsource. But there’s a lot of space between not doing it and doing a little bit.
Gupta: Yes, to both outside contractors and outside the US employees. But these workers will be more integrated into existing job functions and teams, rather than outsourcing entire processes.
Kahn: This offshoring is a serious possibility. Those firms that require some monthly face-to-face interaction at the corporate headquarters will be less likely to engage in offshoring.
Bloom: This is already happening, from what firms tell me. Anti-immigration policies initiated by Trump have accelerated this process by reducing the ability of foreign workers to migrate to the US. So dozens of firms have said if they can’t get workers to their jobs in the US, they will move their jobs abroad. Working from home has shown how easy it is to have fully remote employees and teams, and in an era of tight domestic labor markets with restricted immigration, moving jobs overseas is one common solution (the other being automation).
But I should point out that, currently, this is probably good for most US citizens. US labor markets are incredibly tight, generating painful inflation and shortages of goods and services. Try taking a flight, booking a restaurant meal, or hiring a contractor. It is extremely hard, as there is too much demand for labor right now. So having some foreign workers fill that gap is good news. Of course, if the US hits a hard recession and unemployment rises drastically, that benefit will be less clear.
What will happen to remote work in a recession?
Gupta: I actually suspect remote work will increase. While firms have bargaining power against employees, they mostly want to cut costs like real estate leases, pushing people remote.
Firms are also less interested in onboarding new employees into corporate culture and long-term innovation — two important use cases for the office. It’s more about keeping things going, which can be handled by existing workers at home.
Kahn: Scenario 1: The boss has discretion over who to fire and is more likely to fire the remote worker, because the boss doesn’t really know this worker and hasn’t built up a friendship with the worker.
Scenario 2: Since remote workers do not bear a fixed daily cost of commuting to the office, such workers can more easily reduce their hours to meet the firm’s new demand for labor. In this case, remote workers may be less likely to be fired.
Taylor: Reversing this — putting this genie back in the bottle — is not going to happen. What I think is more likely to happen during a recession is that productivity will become even more important. And so then you will see employers looking really, really hard at the data because they’re going to have to make choices between employee A and employee B. And so employees who are more productive and more efficient are the people who are going to make it through.
Fairweather: Historically, recessions have lasted longer because it takes time for workers to move to job opportunities. If a salesperson in Cleveland lost her job, she may have had to move to San Francisco to find another sales job. But with remote work, you can do a sales job from anywhere. Hopefully this recession is shorter than historical recessions because of remote work.
The story of the modern web is often told through the stories of Google, Facebook, Amazon. But eBay was the first conqueror. One weekend in September 1995, a software engineer made a website. It wasn’t his first. At 28, Pierre Omidyar had followed the standard accelerated trajectory of Silicon Valley: he had learned to code in seventh grade, and was on track to become a millionaire before the age of 30, after having his startup bought by Microsoft. Now he worked for a company that made software for handheld computers, which were widely expected to be the next big thing.
But in his spare time, he liked to tinker with side projects on the internet. The idea for this particular project would be simple: a website where people could buy and sell. Buying and selling was still a relatively new idea online. In May 1995, Bill Gates had circulated a memo at Microsoft announcing that the internet was the company’s top priority. In July, a former investment banker named Jeff Bezos launched an online storefront called Amazon.com, which claimed to be “Earth’s biggest bookstore”. The following month, Netscape, creator of the most popular web browser, held its initial public offering (IPO).
By the end of the first day of trading, the company was worth almost $3bn – despite being unprofitable. Wall Street was paying attention. The dot-com bubble was starting to inflate. If the internet of 1995 inspired dreams of a lucrative future, the reality ran far behind. The internet may have been attracting millions of newcomers – there were nearly 45 million users in 1995, up 76% from the year before – but it wasn’t particularly user-friendly. Finding content was tricky: you could wander from one site to another by following the tissue of hyperlinks that connected them, or page through the handmade directory produced by Yahoo!, the preferred web portal before the rise of the modern search engine.
And there wasn’t much content to find: only 23,500 websites existed in 1995, compared to more than 17m five years later. Most of the sites that did exist were hideous and barely usable. But the smallness and slowness of the early web also lent it a certain charm. People were excited to be there, despite there being relatively little for them to do. They made homepages simply to say hello, to post pictures of their pets, to share their enthusiasm for Star Trek. They wanted to connect. Omidyar was fond of this form of online life. He had been a devoted user of the internet since his undergraduate days, and a participant in its various communities. He now observed the rising flood of dot-com money with some concern.
The corporations clambering on to the internet saw people as nothing more than “wallets and eyeballs”, he later told a journalist. Their efforts at commercialisation weren’t just crude and uncool, they also promoted a zombie-like passivity – look here, click here, enter your credit card number here – that threatened the participatory nature of the internet he knew. “I wanted to do something different,” Omidyar later recalled, “to give the individual the power to be a producer as well as a consumer.” This was the motivation for the website he built in September 1995. He called it AuctionWeb. Anyone could put up something for sale, anyone could place a bid, and the item went to the highest bidder. It would be a perfect market, just like you might find in an economics textbook.
Through the miracle of competition, supply and demand would meet to discover the true price of a commodity. One precondition of perfect markets is that everyone has access to the same information, and this is exactly what AuctionWeb promised. Everything was there for all to see. The site grew quickly. By its second week, the items listed for sale included a Yamaha motorcycle, a Superman lunchbox and an autographed Michael Jackson poster. By February 1996, traffic had grown brisk enough that Omidyar’s web hosting company increased his monthly fee, which led him to start taking a cut of the transactions to cover his expenses. Almost immediately, he was turning a profit. The side project had become a business.
But the perfect market turned out to be less than perfect. Disputes broke out between buyers and sellers, and Omidyar was frequently called upon to adjudicate. He didn’t want to have to play referee, so he came up with a way to help users work it out themselves: a forum. People would leave feedback on one another, creating a kind of scoring system. “Give praise where it is due,” he said in a letter posted to the site, “make complaints where appropriate.” The dishonest would be driven out, and the honest would be rewarded – but only if users did their part. “This grand hope depends on your active participation,” he wrote.
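As a toy illustration of the kind of scoring system described here – not eBay’s actual implementation – a reputation score can be as simple as a running tally of praise and complaints:

```python
# Toy version of a feedback-based reputation score, in the spirit of the
# forum described above: praise adds a point, a complaint subtracts one.
# Illustrative only; this is not eBay's actual system.
from collections import defaultdict

feedback_log = [
    ("seller_a", +1),  # praise
    ("seller_a", +1),
    ("seller_a", -1),  # complaint
    ("seller_b", -1),
]

scores: defaultdict = defaultdict(int)
for user, rating in feedback_log:
    scores[user] += rating

print(dict(scores))  # {'seller_a': 1, 'seller_b': -1}
```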
The value of AuctionWeb would rely on the contributions of its users. The more they contributed, the more useful the site would be. The market would be a community, a place made by its members. They would become both consumers and producers, as Omidyar hoped, and among the things they produced would be the content that filled the site. By the summer of 1996, AuctionWeb was generating $10,000 a month. Omidyar decided to quit his day job and devote himself to it full-time. He had started out as a critic of the e-commerce craze and had ended up with a successful e-commerce company. In 1997, he renamed it eBay. Ebay was one of the first big internet companies. It became profitable early, grew into a giant of the dot-com era, survived the implosion of the dot-com bubble, and still ranks among the largest e-commerce firms in the world.
But what makes eBay particularly interesting is how, in its earliest incarnation, it anticipated many of the key features that would later define the phenomenon commonly known as the “platform”. Ebay wasn’t just a place where collectors waged late-night bidding wars over rare Beanie Babies. In retrospect, it also turned out to be a critical hinge in the history of the internet. Omidyar’s site pioneered the basic elements that would later enable Google, Facebook and the other tech giants to unlock the profit potential of the internet by “platformising” it.
None of the metaphors we use to think about the internet are perfect, but “platform” is among the worst. The term originally had a specific technical meaning: it meant something that developers build applications on top of, such as an operating system. But the word has since come to refer to various kinds of software that run online, particularly those deployed by the largest tech firms. The scholar Tarleton Gillespie has argued that this shift in the use of the word “platform” is strategic. By calling their services “platforms”, companies such as Google can project an aura of openness and neutrality. They can present themselves as playing a supporting role, merely facilitating the interactions of others.
Their control over the spaces of our digital life, and their active role in ordering such spaces, is obscured. “Platform” isn’t just imprecise. It’s designed to mystify rather than clarify. A more useful metaphor for understanding the internet, one that has guided its architects from the beginning, is the stack. A stack is a set of layers piled on top of one another. Think of a house: you have the basement, the first floor, the second floor and so on, all the way up to the roof. The things that you do further up in a house often depend on systems located further down. If you take a shower, a water heater in the basement warms up the cold water being piped into your house and then pipes it up to your bathroom.
The internet also has a basement, and its basement also consists largely of pipes. These pipes carry data, and everything you do further up the stack depends on these pipes working properly. Towards the top of the stack is where the sites and apps live. This is where we experience the internet, through the pixels of our screens, in emails or tweets or streams. The best way to understand what happens on these sites and apps – on what tech companies call “platforms” – is to understand them as part of the broader story of the internet’s privatisation.
The internet started out in the 1970s as an experimental technology created by US military researchers. In the 80s, it grew into a government-owned computer network used primarily by academics. Then, in the 90s, privatisation began. The privatisation of the internet was a process, not an event. It did not involve a simple transfer of ownership from the public sector to the private, but rather a more complex movement whereby corporations programmed the profit motive into every level of the network. A system built by scientists for research was renovated for the purpose of profit maximisation. This took hardware, software, legislation, entrepreneurship. It took decades. And it touched all of the internet’s many pieces.
The process of privatisation started with the pipes, and then worked its way up the stack. In April 1995, only five months before Omidyar made the website that would become eBay, the government allowed the private sector to take over control of the network’s plumbing. Households and businesses were eager to get online, and telecoms companies made money by helping them access the internet. But getting people online was a small fraction of the system’s total profit potential. What really got investors’ capital flowing was the possibility of making money from what people did online. In other words, the next step was figuring out how to maximise profit in the upper floors, where people actually use the internet. The real money lay not in monetising access, but in monetising activity.
This is what Omidyar did so effectively when he created a place where people wanted to buy and sell goods online, and took a cut of their transactions. The dot-com boom began with Netscape’s explosive IPO in August 1995. Over the following years, tens of thousands of startups were founded and hundreds of billions of dollars were invested in them. Venture capital entered a manic state: the total amount of US venture-capital investment increased more than 1,200% from 1995 to 2000. Hundreds of dot-com companies went public and promptly soared in value: at their peak, technology stocks were worth more than $5tn.
When eBay went public in 1998, it was valued at more than $2bn on the first day of trading; the continued ascent of its stock price over the next year made Omidyar a billionaire. Yet most of the startups that attracted huge investment during these years didn’t actually make money. For all the hype, profits largely failed to materialize, and in 2000 the bubble burst. From March to September, the 280 stocks in the Bloomberg US Internet Index lost almost $1.7tn. “It’s rare to see an industry evaporate as quickly and completely,” a CNN journalist remarked. The following year brought more bad news. The dot-com era was dead.
Today, the era is typically remembered as an episode of collective insanity – as an exercise in what Alan Greenspan, during his contemporaneous tenure as chair of the Federal Reserve, famously called “irrational exuberance”. Pets.com, a startup that sold pet supplies online, became the best-known symbol of the period’s stupidity, and a touchstone for retrospectives ever since. Never profitable, the company spent heavily on advertising, including a Super Bowl spot; it raised $82.5m in its IPO in February 2000 and imploded nine months later.
Arrogance, greed, magical thinking and bad business decisions all contributed to the failure of the dot-com experiment. Yet none of these were decisive. The real problem was structural. While their investors and executives probably wouldn’t have understood it in these terms, dot-com companies were trying to advance the next stage of the internet’s privatisation – namely, by pushing it up the stack. But the computational systems that could make such a push feasible were not yet in place. Companies still struggled to turn a profit from user activity.
In his analysis of capitalist development, Karl Marx drew a distinction between the “formal” and “real” subsumption of labour by capital. In formal subsumption, an existing labour process remains intact, but is now performed on a capitalist basis. A peasant who used to grow his own food becomes a wage labourer on somebody else’s farm. The way he works the land stays the same. In real subsumption, by contrast, the labour process is revolutionised to meet the requirements of capital. Formerly, capital inherited a process; now, it remakes the process. Our agricultural worker becomes integrated into the industrialised apparatus of the modern factory farm.
The way he works completely changes: his daily rhythms bear little resemblance to those of his peasant predecessors. And the new arrangement is more profitable for the farm’s owner, having been explicitly organised with that end in mind. This is a useful lens for thinking about the evolution of the internet, and for understanding why the dot-coms didn’t succeed. The internet of the mid-to-late 1990s was under private ownership, but it had not yet been optimised for profit. It retained too much of its old shape as a system designed for researchers, and this shape wasn’t conducive to the new demands being placed on it. Formal subsumption had been achieved, in other words, but real subsumption remained elusive.
Accomplishing the latter would involve technical, social and economic developments that made it possible to construct new kinds of systems. These systems are the digital equivalents of the modern factory farm. They represent the long-sought solution to the problem that consumed and ultimately defeated the dot-com entrepreneurs: how to push privatisation up the stack. And eBay offered the first glimpse of what that solution looked like.

Ebay enlisted its users in its own creation. They were the ones posting items for sale and placing bids and writing feedback on one another in the forum. Without their contributions, the site would cease to exist.
Omidyar was tapping into a tradition by setting up eBay in this way. In 1971, a programmer named Ray Tomlinson invented email. This was before the internet existed: Tomlinson was using its precursor, Arpanet, a cutting-edge network that the Pentagon created to link computers across the country. Email became wildly popular on Arpanet: just two years after its invention, a study found that it made up three-quarters of all network traffic. As the internet grew through the 1980s, email found an even wider reach. The ability to exchange messages instantaneously with someone far away was immensely appealing; it made new kinds of collaboration and conversation possible, particularly through the mailing lists that formed the first online communities.
Email was more than just a useful tool. It helped humanize the internet, making a cold assemblage of cables and computers feel inhabited. The internet was somewhere you could catch up with friends and get into acrimonious arguments with strangers. It was somewhere to talk about politics or science fiction or the best way to implement a protocol. Other people were the main attraction. Even the world wide web was made with community in mind. “I designed it for a social effect – to help people work together,” its creator, Tim Berners-Lee, would later write.
Community is what Omidyar liked best about the internet, and what he feared the dot-com gold rush would kill. He wasn’t alone in this: one could find dissidents railing against the forces of commercialisation on radical mailing lists. But Omidyar was no anti-capitalist. He was a libertarian: he believed in the liberating power of the market. He didn’t oppose commercialisation as such, just the particular form it was taking. The companies opening cheesy digital storefronts and plastering the web with banner ads were doing commercialisation poorly. They were treating their users as customers. They didn’t understand that the internet was a social medium.
eBay, by contrast, would be firmly rooted in this fact. From its first days as AuctionWeb, the site described itself as a community, and this self-definition became integral to its identity and to its operation. For Omidyar, the point wasn’t to defend the community from the market but rather to recast the community as a market – to fuse the two. No less a figure than Bill Gates saw the future of the internet in precisely these terms. In 1995, the same year that Omidyar launched AuctionWeb, Gates co-authored a book called The Road Ahead. In it, the Microsoft CEO laid out his vision for the internet as “the ultimate market”: “It will be where we social animals will sell, trade, invest, haggle, pick stuff up, argue, meet new people, and hang out.
Think of the hustle and bustle of the New York Stock Exchange or a farmers’ market or of a bookstore full of people looking for fascinating stories and information. All manner of human activity takes place, from billion-dollar deals to flirtations.” Here, social relationships have merged so completely with market relationships as to become indistinguishable. The internet is the instrument of this union; it brings people together, but under the sign of capital. Gates believed his dream was at least a decade from being realised. Yet by the time his book came out, AuctionWeb was already making progress toward achieving it.
Combining the community with the market was a lucrative innovation. The interactions that occurred in the guise of the former greatly enhanced the financial value of the latter. Under the banner of community, AuctionWeb’s buyers and sellers were encouraged to perform unpaid activities that made the site more useful, such as rating one another in the feedback forum or sharing advice on shipping. And the more people participated, the more attractive a destination it became. More people using AuctionWeb meant more items listed for sale, more buyers bidding in auctions, more feedback posted to the forum – in short, a more valuable site.
This phenomenon – the more users something has, the more valuable it becomes – is what economists call network effects. On the web, accommodating growth was fairly easy: increasing one’s hosting capacity was a simpler and cheaper proposition than the brick-and-mortar equivalent. And doing so was well worth it because, at a certain size, network effects locked in advantages that were hard for a competitor to overcome. A second, related strength was the site’s role as a middleman. In an era when many dot-coms were selling goods directly – Pets.com spent a fortune on postage shipping pet food to people’s doors – Omidyar’s company connected buyers and sellers instead, and pushed the cost of postage on to them.
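One rough way to see why network effects lock in such an advantage is to count possible pairings. The sketch below leans on the Metcalfe-style approximation that a marketplace’s potential value tracks the number of distinct buyer-seller pairs; that approximation, and the function name, are assumptions made for illustration rather than anything the piece itself claims.

```python
# An illustrative sketch of network effects, assuming (Metcalfe-style)
# that a marketplace's potential value tracks the number of distinct
# buyer-seller pairings. Names here are invented for the example.

def possible_pairings(users: int) -> int:
    """Distinct pairs among `users` people: n * (n - 1) / 2."""
    return users * (users - 1) // 2

for users in (10, 100, 1_000, 10_000):
    print(f"{users:>6} users -> {possible_pairings(users):>12,} possible pairings")

# Multiplying users by 10 multiplies pairings by roughly 100, which is
# why a marketplace that reaches critical mass is so hard to dislodge.
```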
This enabled it to profit from users’ transactions while remaining extremely lean. It had no inventory, no warehouses – just a website. But AuctionWeb was not only a middleman. It was also a legislator and an architect, writing the rules for how people could interact and designing the spaces where they did so. This wasn’t in Omidyar’s plan. He initially wanted a market run by its members, an ideal informed by his libertarian beliefs. His creation of the feedback forum likely reflected an ideological investment in the idea that markets are essentially self-organising as much as a personal interest in no longer having to mediate disputes himself.
Contrary to libertarian assumptions, however, the market couldn’t function without the site’s ability to exercise a certain kind of sovereignty. The feedback forum is a good example: users started manipulating it, leaving praise for their friends and sending mobs of malicious reviewers after their enemies. The company would be compelled to intervene again and again. It did so not only to manage the market but also to grow it, adding new categories of goods and entering new countries to attract more buyers and sellers – an imperative that shareholders imposed after eBay went public in 1998.
“Despite its initial reluctance, the company stepped increasingly into a governance role,” writes the sociologist Keyvan Kashkooli, in his study of eBay’s evolution. Increasing profitability required managing people’s behaviour, whether through the code that steered them through the site or the user agreements that governed their activities on it. Thanks to network effects, and its status as both middleman and sovereign, eBay easily turned a profit. When the crash of 2000–01 hit, it survived with few bruises. And in the aftermath of the crash, as an embattled industry, under pressure from investors, tried to reinvent itself, the ideas that it came up with had much in common with those that had formed the basis for eBay’s early success.
For the most part, eBay’s influence was neither conscious nor direct. But the affinities were unmistakable. Omidyar’s community market of the mid-1990s was a window into the future. By later standards it was fairly primitive, existing as it did within the confines of an internet not yet remodelled for the purpose of profit maximisation. But the systems that would accomplish that remodelling, that more total privatisation of the internet, would do so by elaborating the basic patterns that Omidyar had applied. These systems would be called platforms, but what they resembled most were shopping malls.
The first modern shopping mall was built in Edina, Minnesota, in 1956. Its architect, Victor Gruen, was a Jewish socialist from Vienna who had fled the Nazis and disliked American car culture. He wanted to lure midcentury suburbanites out of their Fords and into a place that recalled the “rich public social life” of a great European city. He hoped to offer them not only shops but libraries and cinemas and community centres. Above all, his mall would be a space for interaction: an “outlet for that primary human instinct to mingle with other humans”. Unlike in a city, however, this mingling would take place within a controlled setting. The chaos of urban life would be displaced by the discipline of rational design.
As Gruen’s invention caught on, the grander parts of his vision fell away. But the idea of an engineered environment that paired commerce with a public square remained. Gruen’s legacy would be a kind of capitalist terrarium, nicely captured by what urban planners call a “privately owned public space”. The systems that dominate life at the upper end of the stack are best understood, to borrow an insight from the scholar Jathan Sadowski, as shopping malls. The shopping malls of the internet – Google, Facebook, Amazon – are nothing if not privately owned public spaces. Calling themselves platforms, they are in fact corporate enclosures, with a wide range of interactions transpiring inside of them.
Just like in a real mall, some of these interactions are commercial, such as buying clothes from a merchant, while others are social, such as hanging out with friends. But what distinguishes the online mall from the real mall is that within the former, everything one does makes data. Your clicks, chats, posts, searches – every move, however small, leaves a digital trace. And these traces present an opportunity to create a completely new set of arrangements. Real malls are in the rental business: the owner charges tenants rent, essentially taking a slice of their revenues. Online malls can make money more or less the same way, as eBay demonstrated early on, by taking a cut of the transactions they facilitate.
But, as Sadowski points out, online malls are also able to capture another kind of rent: data rent. They can collect and make money from the digital traces generated by the activities that occur within them. And since they control every square inch of the enclosure, and since modifying it is simply a matter of deploying new code, they can make architectural changes that prompt those activities to generate more traces, or traces of different kinds. These traces turn out to be very valuable. So valuable, in fact, that amassing and analysing them have become the primary functions of the online mall. Like Omidyar’s community market, the online mall facilitates interactions, writes the rules for those interactions, and benefits from having more people interacting with one another.
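To make the two rents concrete, here is a minimal, hypothetical sketch. The Marketplace and Trace names and the 5% take rate are invented for illustration and describe no real platform’s internals; the point is simply that the sale yields a fee while every action, sale or not, feeds the trace log.

```python
# A hypothetical sketch of the two revenue streams described above:
# a transaction cut (the classic middleman's rent) and the "data rent"
# harvested from the traces every interaction leaves behind. All names
# (Trace, Marketplace, TAKE_RATE) are invented for this example.

from dataclasses import dataclass, field
from datetime import datetime, timezone

TAKE_RATE = 0.05  # the cut taken from each sale, like a mall's rent

@dataclass
class Trace:
    user: str
    action: str      # "search", "click", "bid", "purchase", ...
    detail: str
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class Marketplace:
    def __init__(self) -> None:
        self.traces: list[Trace] = []   # every move leaves a record
        self.fees_collected = 0.0

    def record(self, user: str, action: str, detail: str) -> None:
        """Everything a user does 'makes data'."""
        self.traces.append(Trace(user, action, detail))

    def sell(self, buyer: str, item: str, price: float) -> None:
        self.record(buyer, "purchase", item)
        self.fees_collected += price * TAKE_RATE  # middleman's slice

mall = Marketplace()
mall.record("alice", "search", "vintage lamp")
mall.record("alice", "click", "lamp #42")
mall.sell("alice", "lamp #42", 80.0)

print(f"fees: ${mall.fees_collected:.2f}, traces logged: {len(mall.traces)}")
# Even the browsing that never ends in a sale produced traces --
# the raw material of the data rent.
```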
But in the online mall, these interactions are recorded, interpreted and converted into money in a range of ways. Data can help sell targeted advertising. It can help build algorithmic management systems that siphon more profit out of each worker. It can help train machine learning models in order to develop and refine automated services like chatbots, which can in turn reduce labour costs and open new revenue streams. Data can also sustain faith among investors that a tech company is worth a ton of money, simply because it has a ton of data. This is what distinguishes online malls from their precursors: they are above all designed for making, and making use of, data. Data is their organising principle and essential ingredient.
Data is sometimes compared to oil, but a better analogy might be coal. Coal was the fuel that powered the steam engine. It propelled the capitalist reorganisation of manufacturing from an artisanal to an industrial basis, from the workshop to the factory, in the 19th century. Data has played a comparable role. It has propelled the capitalist reorganisation of the internet, banishing the remnants of the research network and perfecting the profit engine. Very little of this vastly complex machinery could be foreseen from the vantage point of 1995.
But the arrival of AuctionWeb represented a large step toward making it possible. The story of the modern internet is often told through the stories of Google, Facebook, Amazon and the other giants that have come to conquer our online life. But their conquests were preceded and prefigured by another, one that started as a side project and stumbled into success by coming up with the basic blueprint for making a lot of money on the internet.