COVID-19, Pandemics, Spanish Flu

Spanish Flu on the cusp of no longer being the reference point for modern pandemic plague



Some 18 months after the World Health Organization (WHO) declared COVID-19 a global pandemic on March 11, 2020, the world stands on the cusp of the coronavirus replacing the Spanish Flu influenza pandemic of January 1918 to December 1920 as the reference point – the benchmark, as it were – for measuring modern pandemic plague. That will occur very shortly, as the United States crosses the threshold of 675,000 COVID-19 deaths in what is now the novel coronavirus' fourth wave there; a toll that will then exceed that of the Spanish Flu of a century ago in America.

All pretty remarkable, since the name COVID-19 didn't exist prior to Feb. 11, 2020, when the World Health Organization named the disease that had been provisionally known as Novel Coronavirus 2019-nCoV, first reported from Wuhan, China on Dec. 31, 2019. COVID-19 is caused by the SARS-CoV-2 virus.

It is important to note the "in America" qualification. As Laura Spinney writes in her very timely 2017 book, Pale Rider: The Spanish Flu of 1918 and How It Changed the World, our picture of the Spanish Flu pandemic, which began in the waning months of the First World War just over 100 years ago, is very much a reflection of the North American and European experience, rather than that of, say, India, South Africa or Iran. The Spanish Flu was named not for its country of origin but because wartime press censorship was more relaxed in neutral Spain than in either the Central Powers or the Allied Powers in 1918, allowing for earlier news coverage of an illness that, much like COVID-19, swept the world within months.

While some 675,000 Americans died over three years between January 1918 and December 1920 during the three waves of the Spanish Flu pandemic, the country’s population was 103.2 million. Today, the population of the United States is more than 331 million. The world population in 1918 was about 1.8 billion, compared to about 7.8 billion people today.

Also, while global death toll estimates for the Spanish Flu pandemic are speculative to some extent, it is generally accepted it killed somewhere between 50 and 100 million people worldwide. COVID-19’s global death toll stands at about 4.7 million.


There are, of course, all kinds of similarities – and differences – between COVID-19 and the Spanish Flu pandemic: They are not the same type of virus; the former is a coronavirus, the latter an influenza virus. But compulsory masking as a public health-driven non-pharmaceutical intervention (NPI) has been similarly divisive in societies in both pandemics.

The rolling real-time daily death count on the online COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU) in Baltimore functions as our equivalent of the Bulletin of the Atomic Scientists' "Doomsday Clock," circa 1947 – and the clock itself, set at 100 seconds before midnight last Jan. 27, is being profoundly influenced by COVID-19.

“Founded in 1945 by Albert Einstein and University of Chicago scientists who helped develop the first atomic weapons in the Manhattan Project, the Bulletin of the Atomic Scientists created the Doomsday Clock two years later, using the imagery of apocalypse (midnight) and the contemporary idiom of nuclear explosion (countdown to zero) to convey threats to humanity and the planet,” writes John Mecklin, the editor-in-chief. “The Doomsday Clock is set every year by the Bulletin’s Science and Security Board in consultation with its Board of Sponsors, which includes 13 Nobel laureates. The Clock has become a universally recognized indicator of the world’s vulnerability to catastrophe from nuclear weapons, climate change, and disruptive technologies in other domains.”

The Center for Systems Science and Engineering, in the Department of Civil and Systems Engineering in the Whiting School of Engineering at Johns Hopkins University's Latrobe Hall in Baltimore, launched a tracking map website with an online dashboard in January 2020, tracking in real time the worldwide spread of what was then known as the Wuhan coronavirus (2019-nCoV).

Lauren Gardner, a civil engineering professor and CSSE’s co-director, spearheaded the effort to launch the mapping website. The site displays statistics about deaths and confirmed cases of COVID-19 across a worldwide map.

“We built this dashboard because we think it is important for the public to have an understanding of the outbreak situation as it unfolds with transparent data sources,” Gardner said when Hopkins launched it last year. “For the research community, this data will become more valuable as we continue to collect it over time.”

You can also follow me on Twitter at: https://twitter.com/jwbarker22

Popular Culture and Ideas

Dialing up the future: CompuServe, Tandy’s TRS-80 at RadioShack, and the San Jose Mercury News

The “digital divide” is a term usually used to characterize the gulf between those who have ready access to computers and the internet, and those who do not.

I like to think of it in a more archival sense: the digital divide as a demarcation line between online full-text access to today's and yesterday's newspapers – along with those of the years and decades before – and a world where, even if we are all blessed with a plethora of computers and internet service providers, accessing those newspapers of yesteryear for free online is in most cases next to impossible, unless you are fortunate enough to have access to digitized older newspapers such as can be found at the Thompson Public Library: https://thompsonlibrary.insigniails.com/Library/Digital. Otherwise, archival newspaper research for 1978 still means scouring bound volumes in a musty newspaper morgue or library, or spending hours in a dark cubicle with one's head buried and eyes straining, spinning reel after reel of microfilm or sheet after sheet of microfiche.

Does it matter? I think it does. While I can call up verbatim copies of stories I've written for most newspapers since 2001, I cannot as easily access stories at a distance in space and time, such as those I wrote for the Peterborough Examiner back in 1985 on Paul Croft Jr. Croft had been a brilliant computer scientist in the late 1960s for Control Data in Minneapolis, but later, in 1972, while suffering from paranoid delusions stemming from late-onset schizophrenia, he shot and killed a co-worker in a company parking lot in Canada, after hearing voices telling him to do so.

Later, after being released from detention in a mental health institution, Croft relapsed into mental illness, largely because he stopped taking his anti-psychotic medications due to their unpleasant side effects. In 1984 he wounded two OPP Tactical Rescue Unit (TRU) officers who had arrived at his home in a remote part of Haliburton County, Ontario to execute a warrant under the Ontario Mental Health Act, alleging he had breached the conditions of the lieutenant-governor's warrant he was subject to, namely by not taking his prescribed meds. By the time I encountered Croft in October 1985, he was on trial in Lindsay, Ontario in what was then the Supreme Court of Ontario, on two counts of attempted murder.

Croft had shot the two officers with a high-powered rifle; both recovered from their injuries.

Again found not guilty by reason of insanity, Croft became one of the rarest of the rare among what were then often referred to as the criminally insane: a man detained on not one, but two lieutenant-governor's warrants.

Ditto the 1987-88 series of stories I wrote for the paper on the so-called Peterborough Armouries Conspiracy, which had several dimensions, including a number of police investigations involving civilian and military police, several court cases, two very tragic suicides, and finally a coroner's inquest presided over by Ontario's deputy chief coroner at the time. Names like Andrew Webster, Ian Shearer, Jeffrey Atkinson, Lloyd Jackson and Michael Noury have largely been lost in the pre-internet mists of time, recalled only if one happens to have a scrapbook of newspaper clippings, or access to bound volumes of the Peterborough Examiner or its microfilm for 1987-1988.

Without that kind of research access, 30 to 35 years after the events, one's memories of such stories take on a sort of sepia tone, or a through-a-glass-darkly quality. Oddly enough, though, you can find a good summary of the Peterborough Armouries Conspiracy in a June 17, 1987 story headlined "Cyanide deaths a Peterborough nightmare" by Southam News reporter John Kessel, which appeared in, among other places, the now Glacier Media-owned Prince George Citizen, which has digitized its older newspapers, with the PDF available online at: http://pgnewspapers.pgpl.ca/fedora/repository/pgc%3A1987-06-17-24/PDF/Page%20PDF

I can almost tell you to the day in retrospect when I think the internet “arrived.”

When I arrived at Queen’s University in Kingston, Ontario as a history graduate student in September 1993, the main library was still Douglas Library on University Avenue, but across the street kitty-corner to it was a massive construction project where they were building the brand-new Stauffer Library on Union Street. This was the end of the brief five-year NDP Bob Rae era in Ontario and while the economy wasn’t strictly speaking in recession, it was far from booming, so projects of such scale in places like Kingston were rare.

I remember using an internet station in the just-opened Stauffer Library on one of my first visits the next year, in October 1994. The Netscape Navigator browser had just been released that same month, but Queen's was using the NCSA Mosaic browser, released in 1993 and one of the first graphical web browsers ever invented. To their credit, the computer services and library folks at Queen's got it from day one. They knew instantly this was going to be so popular with students that the work stations (and there weren't many) were designed for standing only. How many places in a university library have no seating? Not many. But they wanted to keep people moving, because there would be lineups to use the stations.

I also remember reading the San Jose Mercury News online because it was in Silicon Valley and was one of the very first papers in North America to go online. Today its online archive goes back to June 1985. The funny thing is, the San Jose Mercury News recognized its brief moment in history and for a few years anyway punched well above its weight, doing fine investigative work, both in print and online; a small regional paper no one had ever heard of before the early 1990s unless they lived in Northern California. In its brief shining moment, the San Jose Mercury News had 400 people in its newsroom, revenues of $300 million and profit margins of more than 30 per cent, a bureau in Hanoi, and netted a Pulitzer Prize for foreign news.

In 1994, we all knew intuitively the world had changed with the internet and graphical web browsers. I had sent my first e-mail more than three years earlier, in the spring of 1991, from the Thomas J. Bata Library at Trent University in Peterborough on their "Ivory" server (someone in computer services seemingly had a sense of humour) – and I was sitting down, as I recall. That was neat, but this was on a whole other scale entirely.

I realized in July 1995, as I was finishing writing my master's thesis in 20th century American history – America's symbolic 'Cordon sanitaire?' Ideas, aliens and the McCarran-Walter Act of 1952 in the age of Reagan, on the admission of nonimmigrants to the United States, emigration and immigration policy, and foreign relations in Latin America between 1981 and 1989 – that my class would likely be the last Queen's University history class where students, including me, had few online citations in their footnotes or bibliographies, the style of such citations being still very much in development.

While the San Jose Mercury News is often thought of as pioneering in its online venture, the first newspaper to go online was The Columbus Dispatch in Ohio, way back on July 1, 1980. It was part of a unique CompuServe and Associated Press experiment exploring the potential of online papers. Eventually other AP member newspapers joined the project, including the Washington Post, The New York Times, The Minneapolis Star Tribune, The San Francisco Chronicle, The San Francisco Examiner, the Los Angeles Times, The Virginian-Pilot, The Middlesex News, the St. Louis Post-Dispatch, and the Atlanta Journal-Constitution. But it was The Columbus Dispatch that published the first "online" newspaper when it began beaming news stories through the CompuServe dial-up service. The paper was the first daily in the United States to test a technology that enabled the day's news to flow into home computers at 300 words per minute. Users paid $5 per hour for the service. "To become a subscriber," the paper reported at the time, "a resident will have to have a home computer. Such equipment is now available in electronics shops."

If you had Tandy's TRS-80 from RadioShack, along with a modem and access to the CompuServe dial-up service, you were ready to go – at least until the pioneering online experiment ended in 1982. RadioShack was founded in 1921 on Brattle Street in Boston by two London-born brothers, Theodore and Milton Deutschmann, as a mail-order retailer for amateur ham-radio operators and maritime communications officers; they named the company after the compartment that housed the wireless equipment for ham radios.

Launched in November 1977, the $600 TRS-80 was one of the first mass-market personal computers, with about 16K of memory and a 12-inch-square monitor that displayed characters in a single shade of gray, with no graphics, using software designed by a still-obscure start-up named Microsoft, founded 2½ years earlier, in April 1975, by Bill Gates and Paul Allen.


Spelling

Google Search is a writer’s friend: Primo spell checker


For years now I've used Google Search as my go-to spell checker on the internet for words that stump Microsoft Word's spell checker (which is unfortunately a pretty low bar … "no spelling suggestions" and red-underlined words are a pretty common occurrence; I may get one yet writing this sentence).

A spell checker is an application program that flags words in a document that may not be spelled correctly. Spell checkers may be stand-alone, capable of operating on a block of text, or part of a larger application, such as a word processor, e-mail client, electronic dictionary, or search engine.

“The spell checker scans the text and extracts the words contained in it, comparing each word with a known list of correctly spelled words (i.e. a dictionary). This might contain just a list of words, or it might also contain additional information, such as hyphenation points or lexical and grammatical attributes,” Wikipedia tells me.

“An additional step is a language-dependent algorithm for handling morphology. Even for a lightly inflected language like English, the spell-checker will need to consider different forms of the same word, such as plurals, verbal forms, contractions, and possessives. For many other languages, such as those featuring agglutination and more complex declension and conjugation, this part of the process is more complicated.”
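The dictionary-lookup-plus-morphology process Wikipedia describes can be sketched in a few lines of Python. This is a minimal illustration, not any real spell checker's internals: the word list and suffix rules below are tiny, invented assumptions.

```python
# A naive sketch of the process described above: extract words, strip a
# few simple English inflections, and flag anything not in the word list.
import re

DICTIONARY = {"the", "spell", "checker", "scan", "text", "word", "dog", "walk"}

def base_forms(word):
    """Yield candidate base forms for a lightly inflected English word."""
    yield word
    if word.endswith("'s"):   # possessive: dog's -> dog
        yield word[:-2]
    if word.endswith("s"):    # plural or verbal -s: scans -> scan
        yield word[:-1]
    if word.endswith("ing"):  # verbal form (very naive): walking -> walk
        yield word[:-3]

def flag_misspellings(text):
    """Return the words in `text` that match no known dictionary form."""
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if not any(f in DICTIONARY for f in base_forms(w))]

print(flag_misspellings("The dog's spel checker scans words"))  # -> ['spel']
```

Real spell checkers such as Hunspell do the same thing at scale, with large dictionaries and affix rules far more sophisticated than the crude suffix-stripping shown here.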

Most of the time I do know how to spell the word triggering the red alert, but when I am composing something quickly, my head has a tendency to overrun my largely two-index-finger typing as I write (err … type), and sometimes, when more than one word is suspect, it is just as fast to copy-and-paste the sentence into Google Search as to individually correct each one. Sometimes, of course, I correct the word in Word just to make sure I really do remember how to spell it. Sort of like doing math in your head, or at least on paper with a pen or pencil, rather than using a calculator. We pretty much all figure we should be able to do those things manually; we just don't want to overdo it.

This got me thinking the other day, wondering why Google Search is so much better at correcting my spelling in sentences, almost as an afterthought, while it completes a search that may or may not be additionally helpful in and of itself. Google Search will often finish a sentence correctly for me, even if I only paste or type a part of the sentence into the search box or bar.

My first hunch was that it had something to do with the vast amount of data Google Search processes – over three billion searches a day – and the algorithms and other proprietary tools developed from it.

My second hunch was that if I was pondering this other people have thought about it, researched it, and likely written about it before me.

My intuition for both hunches turned out to be correct.

Intuition, in fact, is what Google Search is all about. What makes it intuitive? Context. Context rules.

John Breeden II, the Washington, D.C. chief executive officer of Tech Writers Bureau, who formerly was the laboratory director and senior technology analyst for Government Computer News (GCN), where he reviewed thousands of products aimed at the U.S. federal government – everything from notebooks to high-end servers – and at the same time decoded highly technical topics for broad audiences, wrote about the topic in a Nov. 18, 2011 article for GCN.

“My biggest problem with Word is that there are some words that simply trip it up,” Breeden wrote. “When writing about temperature for our many rugged reviews, I always put ‘Farenheight,’ which Word thinks should be changed to ‘Fare height.’ That doesn’t help at all.

“However, when the same misspelled word is pasted into Google, it says, ‘showing results for Fahrenheit instead.’ There are quite a few other words that confuse Word but not Google. They are not difficult to find.

“I have to wonder why Google is so smart when it comes to figuring out what word a user wants to use. My guess is that the database Google is pulling from is so massive that it’s probably seen a lot of the same basic spelling mistakes. There are probably a lot of people who have wanted to search for Fahrenheit but typed in ‘Farenheight’ instead. Nice to know that I’ve got company.

“You would think it would be simple for word processors to use the same type of technology to improve their accuracy, but I suppose that would involve capturing data from their users and then making the connections between common mistakes and the accurate spelling.

“I thought that is what spell check was supposed to do, but instead I think it just matches the misspelling with words that are somewhat close to what you’ve typed. And Google obviously goes beyond that to associate common mistakes with actual words.”
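The "words that are somewhat close" matching Breeden attributes to word processors is essentially string-similarity ranking against a fixed dictionary. A hedged sketch, using Python's standard-library difflib and a made-up five-word dictionary (both are illustrative assumptions, not Word's actual mechanism):

```python
# Rank dictionary words by string similarity to the misspelling,
# keeping only reasonably close matches.
from difflib import get_close_matches

DICTIONARY = ["fahrenheit", "celsius", "height", "fare", "freight"]

def suggest(word, n=3):
    """Return up to `n` dictionary words similar to the misspelling."""
    return get_close_matches(word.lower(), DICTIONARY, n=n, cutoff=0.6)

print(suggest("Farenheight")[0])  # the closest match is "fahrenheit"
```

Note that this approach knows nothing about which mistakes people commonly make; it only measures how close two strings look, which is exactly the limitation Breeden is pointing at.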

An anonymous poster at Quora, a question-and-answer website where questions are asked, answered, edited and organized by its community of users, wrote on Sept. 1, 2012 in response to the question, “How is google so good at correcting spelling mistakes in searches?”:

“Google (search engines in general) has clusters processing tons (TB’s) query logs, which try to learn the transformation from original misspelled sentence to the corrected one. These transformation schemes are fed into the front end servers which serve the auto completion (and/or corrections to queries).

“Also these servers have lot more processing power and memory and disk space of course will not be an issue at all (for the learned transformations).

“Also since Google crawls the entire web regularly it will learn new words and suggest corrections Word can’t do till next release.”

Quora also aggregates questions and answers to topics.

“Desktop software usually have tight constraints on processing power, memory or disk space they could use to run compared to that of server based applications and usually are expected to keep the internet usage to a minimum (at least for MS Word.)

“They use static resources (dictionary that might only be current at the time of launch) and can’t employ complex algorithms due to the above said restrictions and hence employ heuristic algorithms which may not [be] very predictive of the correct word.”

Cosmin Negruseri, vice-president of engineering at Addepar, an investment management technology company, formerly worked as an engineer at Google (both companies are based in Mountain View in Santa Clara County, California), where he worked on ads, search and Google Code Jam, an international programming competition hosted and administered by Google. He replied the same day, writing: “The main insight in modern spell correctors is using context. For example New Yorp is a misspelling of New York with a high probability.”
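Negruseri's point can be illustrated with a toy bigram model: score each candidate correction by how often it follows the preceding word. The counts below are invented for the example, standing in for the query-log statistics a real search engine would learn:

```python
# Context-based correction sketch: pick the candidate most often seen
# after the preceding word, per a (made-up) bigram count table.
BIGRAM_COUNTS = {
    ("new", "york"): 1000,
    ("new", "yorp"): 0,
    ("new", "fork"): 2,
}

def correct_with_context(prev_word, candidates):
    """Choose the candidate most frequently observed after `prev_word`."""
    return max(candidates,
               key=lambda c: BIGRAM_COUNTS.get((prev_word.lower(), c.lower()), 0))

print(correct_with_context("New", ["Yorp", "York", "fork"]))  # -> York
```

In isolation "Yorp" and "York" are equally plausible one-letter variants of each other; it is the preceding word "New" that makes "York" the overwhelming favourite, which is exactly what Negruseri means by context.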
