Wednesday, April 19, 2017

Re-tooling for Culture Wars 2.0
Mike Waggoner
Supper Club
April 18, 2017

What I am saying tonight is different from what I would have said some months ago—perhaps not so much different in substance, but rather in its urgency. Between then and now, of course, was the watershed event of the November 8 US election. This event, both the run up to it and the ensuing fallout from it, has affected me in unexpected ways, but primarily in my personal stance towards my work. Like several in the room, I am “of a certain age”—that time when some of my colleagues and friends are “heading for the exits,” either by way of retirement or the funeral pyre. It is about that time when some of us may be reaching what one scholar called the fourth stage of Hindu spiritual development—the one where the elders head out to the forest leaving the striving to those younger.
In fact, over the last couple years I have found myself thinking about stepping away from academe. I am tiring of the bureaucracy at my university. I am currently on my fifth president (not counting the four interims) and my fifth dean of our college. I have seen legislative wrangling toy with university budgets and have endured various management and teaching-learning fads as they sweep across the ever-shorter attention span of administration. I have, to be sure, continued to enjoy my remaining colleagues and students and particularly my study and writing. But I thought I could foresee the near-term trajectory of work in my field and those who could carry it on and that things would be fine without me. Of course, that is still the case.
But the 2016 election was a proverbial “wake-up call” for me. I, along with many others, clearly did not read the near-term trajectory of our politics accurately. We were not to enjoy the (assumedly) easy transition in administrations that would continue the hard-fought but achievable progressive society we assumed was all our goal. “Cold water in the face,” “slap in the face,” “fire lit under me”—whatever the simile, many of us woke that morning of the 9th needing to come to terms with a new reality. The mean-spirited, racist, xenophobic, misogynist (and we can go on) rhetoric of the campaign was rewarded, with those attitudes seemingly sanctioned by the voting public. (At the same time, we can and should console ourselves that there were nearly 3 million more voters who opposed these views, so we are in the majority--still, we also know how our electoral system worked out.) But what does this “wake-up call” mean for me, for any of us who share this sentiment?
This is pretty political, you may be saying to yourself, although up to this point I think I may be preaching to the choir. In our current larger societal context, however, I would argue it has never been more important in the history of our country and, indeed, the world for each of us to marshal our knowledge, energy, and resources in the service of the public good. There is considerable experience and wisdom in this room that needs to be shared. As the Farmers Insurance commercial reminds us: “we know a thing or two because we’ve seen a thing or two.” Whatever drift I may have been settling into regarding my own future work arrangements--that changed virtually overnight. I am awake and I say—to the barricades.
Now, before we all break into our favorite songs from Les Mis, we need to review the landscape for what this means for us, because however mad and determined to do something we may be, we do operate in environments that shape, enable, and constrain our activity. In my remarks tonight, I would like to talk about the challenges of the milieu in which we live and the idea of claiming and exercising our voice in this time.
A survey of 7000 first year college and university undergraduates in the US revealed that only 6 percent of them could name the 13 colonies and many of them thought the first president was Abraham Lincoln, who was also known for “emaciating the slaves.” This information was reported in a New York Times article--in 1943. In a similar survey done at the bicentennial, no improvement was shown. Current assessments continue to show a similar dismal trend of broad cultural ignorance.
This apparently continuing deficit in basic knowledge calls to mind an anniversary that bears on this consideration of the current cultural milieu. This year is the 30th anniversary of the publication of Allan Bloom’s The Closing of the American Mind, often referred to as the “opening shot in the culture wars.” Bloom, a distinguished University of Chicago political philosopher, argued that the distinctive American character was being lost to a plethora of new and emerging “voices” parading under the banner of diversity. Education and the larger society were being eroded by competing (read “lesser”) works being admitted to the university curriculum while the traditional canon, ridiculed as that of “dead white men,” was scaled back. This brief description risks caricaturing his argument; his work is complex and nuanced and deserves attention as a serious act of public scholarship, whether we hold it in high or low esteem.
Its publication proved wildly popular and produced a flurry of responses and companion pieces, perhaps predictably among them one called The Opening of the American Mind by historian Lawrence W. Levine, published in 1996. It was an articulate counter argument, one commentator saying that the book should “put an end to ‘culture war’ talk.” It neither gained the traction of Bloom’s book, nor settled the argument. Andrew Hartman produced an excellent 2015 history of the culture wars, A War for the Soul of America (the title taken from the battle cry of Pat Buchanan in his 1992 speech at the Republican National Convention). Hartman summarizes his argument this way:
This book gives the culture wars a history—because they are history.
The logic of the culture wars has been exhausted. The metaphor has run its course (p. 285).

I’m not so sure. The same year Hartman made this declaration, 2015, Mark Bauerlein and Adam Bellow published an edited volume entitled The State of the American Mind, a collection of 15 essays essentially continuing Bloom’s argument, just updating it. Bauerlein is an academic--English professor at Emory--and Bellow is an executive in publishing. (Adam Bellow is also the son of Saul Bellow, the noted novelist and University of Chicago professor who, coincidentally, wrote the foreword to Bloom’s book “back in the day.”) To underscore its relationship to the earlier days of the culture wars, this latest salvo also features an introduction by the famous or infamous, again depending upon individual proclivities, E.D. Hirsch, Jr., author of Cultural Literacy: What Every American Needs to Know. Of course, both of these recent books (Bauerlein and Bellow, and Hartman) appeared before last year’s presidential election, where the continuing divide in US culture was laid bare. If there was any doubt that the culture wars continue, there should not be now, albeit in mutated form. Unlike the Thirty Years’ War, the religious wars of 17th-century Europe, there does not seem to be any corresponding Peace of Westphalia in sight for our 30 years’ culture wars. The following is some of why I think that is the case.
Andrew Hartman and others point out that in the nearly twenty years between the end of World War II and the election of John F. Kennedy, there coalesced a set of conservative cultural standards, “assumptions and aspirations shared by millions of Americans,” that came to constitute a “normative America.” These standards included “hard work, personal responsibility, individual merit, delayed gratification, social mobility,” stringent sexual and gender expectations within heterosexual marriage, and a consensus around white Judeo-Christian values, with the cohesiveness of these norms deriving from the shared, perceived threat of the Cold War and alien cultures and ideologies (p. 5).
Hartman succinctly summarizes the transition ushered in by the upheavals that would occur in the 1960s: “The new America given life by the sixties—a more pluralistic, secular, more feminist America—was built on the ruins of normative America” (p. 6). His announcing the “ruins of normative America” to me was a bit like Mark Twain’s famous quip about rumors of his death being greatly exaggerated. This normative America, thought to be lost to the 60s, would begin to find its voice again in Richard Nixon’s 1969 reference to the “silent majority,” a phrase we heard resurrected nearly 50 years later in this past election cycle.
So, in our current cultural milieu we recognize a pervasive lack of basic knowledge thought to be necessary to viable citizenship. We further recognize a continuing 50 plus year old cultural divide between the world reacting to and emerging from 1960s America. There are two other elements in the environment, newer I think, that we should acknowledge and take into account as we assess our stance toward what we can do as individuals.
First, more than there being a continuing basic civic illiteracy, some argue that there is actually a “campaign against established knowledge,” to borrow a phrase from Tom Nichols’s new book, The Death of Expertise--something we have, again, seen come to the surface in the recent election cycle and continue through to the present. There has been proven distortion and misrepresentation on both sides, and even outright lies and entirely fabricated “fake news.” Some of the fall-out from all this showed up in a recent poll that found that 44% of Americans believed
that media made up stories and fabricated sources. (By the way, I hope no one here had anyone injured in the Bowling Green Massacre). But beyond that there is a deeper current in American culture that has been with us a very long time.
Richard Hofstadter argued this in his 1963 book Anti-intellectualism in American Life. Though the seeds of this attitude may be seen as early as Alexis de Tocqueville, in his 1835 and 1840 volumes of Democracy in America, it was in the 1952 presidential election between Dwight D. Eisenhower and Adlai Stevenson that this epithet took hold, and it was reinforced and exacerbated during the McCarthy era in the 1950s. (So here we have another characteristic baked into “normative America”--anti-intellectualism.) With the election of Eisenhower, as Arthur Schlesinger, Jr. put it, “the New Dealers were replaced by the car dealers.” Schlesinger argued that the election brought on “the vulgarization which has been the almost invariable consequence of business supremacy.” He more pointedly, and provocatively, went on to say, “Anti-intellectualism has long been the anti-semitism of the businessman.” The mid-50s collapse of McCarthy, combined with the shock of the Sputnik launch illuminating the shortcomings in American science, led to a brief resurgence of respect for the intellect heading into the 60s, though that respect was later tarnished by the “intellectuals’ war” in Vietnam--engineered by “the best and the brightest” led by Robert McNamara (David Halberstam’s book of that title is, by the way, one of Steve Bannon’s favorites). In a more recent analysis of American anti-intellectualism, Susan Jacoby’s The Age of American Unreason traces similar historical trends and brings them up to date with current examples.
The argument here is that there is a basic tendency to rely on our own assessment of a situation over rational comment by another with presumed and credentialed expertise on the subject. This self-reliance may arise, for example, because one has lost trust in so-called experts after previous expert pronouncements proved off the mark. In Nichols’s book he argues that we have entered a new stage in this evolution, though the move is a matter of degree rather than kind. It is one brought on by the “Google-fueled, Wikipedia-based, blog-sodden collapse of any division between professionals and laymen, students and teachers.” But there is one further element that threatens to further manipulate this, perhaps, “socially genetic” American distrust of expertise, and it is one enabled by our increasing technological sophistication, which is being used to shape the information we receive at a level of which we are unaware.
We are all familiar with this manipulation at a basic marketing level. We have all done searches on Google, Amazon, or wherever, only to find later that ads for those items mysteriously pop up in our Facebook feeds or on other online sites we visit. We’re being tracked and profiled. But this surveillance and ensuing analysis has gone further, much further. A small US firm, Cambridge Analytica, spun off from the larger British data analytics firm SCL, specializes in “election management strategies” and “messaging and information operations.” SCL has refined its models over 25 years of military psychological operations (psyops) work in places like Afghanistan and Pakistan. Through the use of sophisticated algorithms employed by artificial intelligence, using automated bots to rapidly and tirelessly examine hundreds of thousands of internet sites, these companies are coming to know our habits, emotional triggers, and subtle communication preferences of which we may not be aware.
As an example, SCL, the British parent company to Cambridge Analytica, built a psychometric model by creating a Facebook quiz (admit it, some of us have taken one), getting a response rate of 6 million users and thereby producing a remarkable trove of data. They further found that by deploying the automated bots across the internet to correlate and corroborate patterns, they could, with 150 “likes” on a Facebook page, predict the user’s behavior better than the user’s spouse could. With 300 likes, they claimed to know you better than you know yourself.
Cambridge Analytica, the US offshoot of SCL, claims to have 5000 pieces of data on each of 220 million US voters. What do they know about us, and how have they been using it? They can track our reactions to words and phrases and then shape their messaging accordingly (think about the addition of the angry emoji and other more incremental reaction tools on Facebook). The information obtained through bio-psycho-social profiling is being “weaponized,” to use a favorite term of former Breitbart CEO Steve Bannon, now White House strategist and Trump whisperer. The objective is “cognitive warfare.” Put together with our “Google-fueled, Wikipedia-based, blog-sodden” way of life, the stakes associated with evaluating information on the internet, or any source, go way up.
Now, just to spice up this “milieu stew,” let’s add one more ingredient, and that is our current treatment of the idea of “political correctness.” We heard this used again and again in this past election. The phrase originated in the 1950s McCarthy era as a sarcastic reference to Stalinist Russia, where one could be punished for not parroting the “official” line. It became employed early in the culture wars, in the 80s and forward, again in a sarcastic manner, to disparage any attempt to acknowledge and show respect for some group other than the dominant (most often white Christian) one. It has morphed during the past election cycle into criticism by the perceived down-trodden (mostly the Christian Right and poor working-class whites—and there is some overlap) that their rights are being displaced by minority groups. In connection with this latter sentiment, religious liberty has morphed from the free exercise of religion delineated in the 1st Amendment to the Constitution into the right to use one’s religious beliefs to defend one’s prejudices.
So, we have a large portion of the general population that persists in a low-level understanding of the rudiments of civic knowledge. Additionally, I argue that we must acknowledge evidence of long term anti-intellectualism in the United States. Further, this ignorance and its associated attitude constitutes a condition toward which current communication techniques and technologies are being employed to sway public opinion in ways that many of us would say are authoritarian and inimical to American values. And, words and ideas are being re-contextualized for differing purposes.
In the words of Tolstoy: “What shall we do and how shall we live?” I believe that this is a question that each of us must answer. I want to propose one starting place to formulate an answer. And that is to analyze the kind of unique power that each of us has to employ in this fight, because in the end, I believe that solutions to these pervasive problems will involve a power struggle—one that begins with each of us as individuals, giving renewed poignancy to the phrase: “Think globally, act locally.” We do have power, even though for some of us, as the poet Alfred Lord Tennyson said, “We are not now that strength which in old days, Moved earth and heaven.” Remember: we know a thing or two because we’ve seen a thing or two.
I want to use a well-known and serviceable, if old, study of power by French and Raven that I feel is germane to consider at this juncture. This is because the strength of one’s impact in speaking out will be affected by the overall power one brings to the task.
As you may recall, French and Raven identified five different kinds of power in their classic 1959 study: 1) coercive, 2) legitimate, 3) reward, 4) expert, and 5) referent, which I will call association. Coercive power does not apply so much to our discussion here, as we cannot make anyone attend to our lofty pronouncements (well, my wife may have that power over me); this type of power pertains more to a military or incarceration situation. Legitimate power is that which comes from an official position that we recognize as rightfully held, usually associated with an organization or bureaucracy. Reward power, as it sounds, involves being able to bestow some desired result on the recipient. This type of power may come into play if your audience sees your contribution as a valued return on the investment of their time in listening to you. Expert power, closely related to reward power, is clearly a pertinent type of power if you are an independently recognized and desired source of the knowledge being sought. Finally, there is the power of association, in which a public identifies with a speaker for reasons beyond the other kinds of power, be they expert or legitimate: you may know them personally or there may be some non-rational draw to them.
For one example, let’s apply this to my talk tonight. First, I have no coercive power here. I cannot make anyone follow what I say, particularly in this group whose not-so-hidden mission is to argue about everything. Regarding legitimate power, I am a duly invited and elected member of this august body, so at least I have some minimal legitimate standing to be holding forth tonight. Any reward power I may have in this instance depends upon whether, by the end of this talk, you feel some positive resonance with what I have said or at least were somewhat entertained--in either case a positive trade-off for your time spent. Expert power is not supposed to come into play in this group, as we are to speak outside of our areas of expertise, but I suppose there could be some expert power residing in one’s ability to make a convincing argument. Finally, there is associational power. Does our individual relationship involve some dimension that draws you to what I am saying? Is it that my white hair cries out “wisdom”? Or are we good enough friends that you’re extending me a credibility “line of credit,” thereby giving me the benefit of the doubt? Or, by the same token, there could be a negative
attribution arising out of association. You’ve heard something suspect about me, so you believe that I’m talking out of my . . . depth. The cumulative power that accrues in this calculus will determine the extent to which you as a hearer will be impacted by these remarks. It also works the other way of course. Who we listen to and are persuaded by depends upon our assessment of that speaker’s collective power.
As I alluded to above, my wife, in addition to other kinds of power (aka charm) has some measure of coercive power as reflected in the saying “ain’t mamma happy, ain’t nobody happy.”
In a small example of legitimate power, we all defer to (and count on) Judy’s role in scheduling us to speak and Mike’s role in alerting us monthly of our meeting. For reward power, perhaps timely service of food, drink, and processing of our checks by Tony’s staff. For recognized expert power in law we would acknowledge Max or Darius, or for questions within the physical sciences, Lynn or Paul. The power of association--the self-congratulatory good will extended to each other in the spirit of “for [insert correct pronoun]’s a jolly good person.”
I believe that we must consider each of our audiences in a similar way. We all have circles in which we move where our influence may be exerted. Again Tennyson, “that which we are we are, one equal temper of heroic hearts, made weak by time and fate, but strong in will. . .”
I am reminded of the famous Pogo cartoon where he says “We are faced with insurmountable opportunities.” Regarding the attitude that we take into this fray, a couple of things come to mind. Cornel West visited UNI last year, and someone asked him: “In the face of all this, are you optimistic?” He said, “No, but I must do this anyway.” I take that sober reflection together with a longer perspective we should all recognize from Martin Luther King: “The arc of the moral universe is long, but it bends toward justice.”


Sunday, October 23, 2016

Digital Distractions & Digital Overload: Maybe Nicholas Carr (The Shallows) was right! Supper Club Speech--Jan. 19th 2016 Cherie Dargan

Cherie’s Supper Club Speech (slides posted on SlideShare)

1. Digital Distractions & Digital Overload: Maybe Nicholas Carr (The Shallows) was right! Supper Club Speech--Jan. 19th 2016 Cherie Dargan
2. Overview
• Watch a brief interview with Nicholas Carr, author of the best seller The Shallows
• Consider evidence of digital distractions and digital overload -- infographics
• Discuss several compelling quotes from the book The Shallows
• Share experiences working with students in face-to-face and online classes struggling to focus
• Suggest a prescription for all of us wanting to learn how to focus in the midst of distraction
3. The Problem: I want MORE I’m writing a weekly blog and really enjoying it. ….I love doing research online with my PC, iPad, or iPhone, but find myself searching for more information even when I think I have enough. There is a hunger, a desire, even a lust for MORE information, and more visually based information—photos, videos, and infographics. Turns out, I’m not alone.
4. We’re spending 11 hours a day on media, including our various devices with screens! Nielsen reports on media usage: chart by Statista. http://www.geekwire.com/2015/nielsen-reports-that-the-average-american-adult-spends-11-hours-per-day-on-gadgets/
5. Mobile Devices give us Access to the Web, 24/7
6. Look at how much more we can do online! HCC, Spring 2012 -- iPad pilot. Apps, Social Media, Games, HCC websites. London Internet Cafe, March 1999—Mike checks his email.
7. We have too much to do! (email, files, pics, posts, texts) We are living in the age of digital overload—we get too many texts, email messages, social media posts, tweets, pins, & alerts to read and respond to in any given day. We are filling up our hard drives. We can't keep up with the flow of information, entertainment, news, and cat videos. We don’t want to miss out on anything!
8. The Data Explosion (2014) (Infographic) Susan Gunelius. “Data Never Sleeps. The Data Explosion in 2014, Minute by Minute – Infographic.” July 12, 2014. http://aci.info/2014/07/12/the-data-explosion-in-2014-minute-by-minute-infographic/ According to this article, every minute:
· Facebook users share nearly 2.5 million pieces of content.
· Twitter users tweet nearly 300,000 times.
· Instagram users post nearly 220,000 new photos.
· YouTube users upload 72 hours of new video content.
· Apple users download nearly 50,000 apps.
· Email users send over 200 million messages.
· Amazon generates over $80,000 in online sales.
9. Infographic on the Brain & What it Wants! http://neomam.com/interactive/13reasons/ “13 Reasons The Brain Craves Infographics” (The animated version: a timer at the bottom tells you how long you have been reading the infographic). What is an Infographic? A visual packed with facts.
10. A few facts from “13 Reasons”
1) The use of visualized information has increased:
· 400% in literature since 1990
· 9900% on the internet since 2007
· 142% in newspapers between 1985 and 1994
2) We are visually wired: almost 50% of your brain is involved in visual processing, and 70% of your sensory receptors are in your eyes
3) Infographics help because we suffer from information overload:
· We get 5 times the information we did in 1986
· We get 34 gigabytes of information (or 100,500 words) on an average day
· On average, we read only 28% of the words per visit
11. “13 Reasons Your Brain Craves Infographics”
12. The Interview with Nicholas Carr https://www.youtube.com/watch?v=cKaWJ72x1rI “What the Internet is Doing to Our Brains.” Published on May 6, 2013. Interview with Nicholas Carr, the author of The Shallows: What the Internet is Doing to Our Brains.
13. Follow up to the video I use this video with my Composition students and it helps them to understand what is happening in their brains when they go online. If you are interested, you can check out the companion video that explains some of the “hidden gems” in the video. I’ll send out the link to the presentation on Google Docs. https://www.youtube.com/watch?v=_Yf_-5VHiR0 -- Hidden Gems in, "What the Internet is Doing to our Brains"
14. Nicholas Carr’s website http://www.nicholascarr.com/
15. True Confessions: Not a Fan at first! When the book The Shallows first came out, I gave it a quick look and thought it was rather pessimistic, and put it aside. I was looking for a highly readable text for my students, and didn’t think this was it! As one of my favorite professors, Dr. Bob from Buena Vista would say, there aren’t many pictures and lots of big words. Lately, I’ve been taking another look…..
16. Why is this digital distraction happening? The net is changing how we respond to information as well as how people are formatting information online (little chunks of info, lots of visuals) We aren’t reading as much and the way we read is changing (scanning and skimming) Carr argues that our brains are being rewired and that we are constantly seeing new information. We are also being OVERLOADED with information!
17. Carr: Switch from Reading to Power Browsing Most Web pages are viewed for less than 20 seconds. The switch from reading to power-browsing is happening very quickly and it represents a deeper change in our thinking. The digital environment encourages people to explore broadly but at a superficial level. Patience with reading long documents is decreasing. There is a compelling urge to skip ahead. Skimming is becoming the dominant mode of reading. Of course there are compensations, positive aspects of this. Every medium develops some cognitive skills at the expense of others. (pages 135-139)
18. Carr: The Net is an interruption system "The Net is, by design, an interruption system, a machine geared for dividing attention." (131) "Frequent interruptions scatter our thoughts, weaken our memory, and make us tense and anxious." (132) "The near-continuous stream of new information pumped out by the Web also plays to our natural tendency to 'vastly overvalue what happens to us right now….'" (134)
19. The map, the clock and the book Without going into too many details, Carr argues that humans have been changed by these three inventions--or tools of the mind, as he calls them. Maps gave us a sense of where we are and where we want to go: they helped us to make sense of the world Clocks gave us a way to measure time but also changed the way we saw things, as people began to divide time up into chunks, with certain times reserved for certain activities (Chapter 3) The Clock and map also gave us new metaphors and expanded language and thought. Books came along later and brought more changes (chapter 4).
20. Books, Gutenberg & literacy Carr discusses the development of writing, and its significance, as well as the role of Gutenberg’s printing press in chapter 4. He describes it as one of the most important inventions in history (69). Francis Bacon wrote that only the inventions of gunpowder and the compass had impacted human affairs as much.
• The number of books produced in the 50 years after Gutenberg’s invention equaled the number produced by scribes during the previous 1000 years (69).
• It became possible to buy books and to build libraries, and literacy was encouraged.
• By the end of the 15th century, more than 250 towns had a printing press and had produced over 12 million books.
21. Carr: the screen VS. the book "After 550 years, the printing press and its products are being pushed from the center of our intellectual life to its edges." "The world of the screen…is a very different place from the world of the page. A new intellectual ethic is taking hold. The pathways in our brains are once again being rerouted." (77)
22. Distracted & Overloaded!
23. Signs of Digital Overload
· My Dropbox alerts me that it is full and will not sync until I remove some files.
· My sister calls because she can no longer upload new pictures to her computer: I talk her through the steps and we discover she has filled up her hard drive with pictures and videos.
· Apple offers to switch my iCloud account to double the storage for about the same amount of money. I did it on the spot and watched my storage space DOUBLE instantly. (Who says you can’t buy happiness?)
· My students tease me whenever I bring up my Hawkeye email--you have a thousand unread messages??!! Yes, I subscribe to a lot of email newsletters! (from my blog post for Nov. 20 -- Digital Overload)
24. The Net is subsuming our other technologies It is "becoming our typewriter and our printing press, our map and our clock, our calculator and our telephone, our post office and our library, our radio and our TV." (83) We never really have to disconnect. TV watching has not declined but we are devoting much less time to reading words printed on paper. The old technologies become a cultural dead end. The new technologies govern production and consumption, guide people's behavior and shape their perceptions. (89) Changes in the form change how we use, experience and understand the content. (from The Shallows)
25. The Book VS. The Web "Research continues to show that people who read linear text comprehend more, remember more, and learn more than those who read text peppered with links." (127) Ironically, Geeky Grandma loves her Kindle and ebooks, while the majority of my students say they prefer print books but do not seem to “read” them very carefully.
26. An Aha moment! My students stare down at their smartphones--to check the time, to check for a new text, to check their scores on the Canvas app (our online CMS), or to check for an email that I just mentioned sending to their class. Some read an ebook and many have used the navigation on their phone to get to a new destination. They don’t tote around big laptops for the most part: the smartphone is their clock, map, and book.
27. Technology’s impact on Higher Ed (Go Web) What have we seen in the past 20 years?
• From chalkboards to smart boards, and internet access in the classroom
• From books to eBooks, plus YouTube videos, and online course management systems for all classes, whether online or F2F
• Consolidation of book publishers, who are investing heavily in online tools
• Teachers report attendance and final grades online
• Email and other communication tools encourage communication with students, who would rather text, call, or email than show up at the office
• Most teachers give some or all of their tests online and create drop boxes for assignments, which are graded online and tie into an online gradebook
28. Technology and Workload I found a wonderful quote by Richard Beasley in a blog post about digital overload: “If you are not careful, technology can actually increase your workload rather than increase your productivity.” This was my experience this past fall, when we switched to a new course management system. I had no idea how much time it would take to recreate five course websites and then grade almost everything online.
29. Do the math...1100 hours on Canvas, Fall ’15 I spend many hours online during my workday, using Canvas, our new course management system, to teach both face-to-face and online classes. I use Canvas for tests and worksheets, collect work with drop boxes, post announcements, and have all of my handouts organized in five separate webspaces, one for each class. By finals in December, I had spent approximately 1100 hours on Canvas. That works out to 61 hours a week for 18 weeks (from the first week of August, rebuilding those websites, through finals, grading final essays and exams, and recording final scores). That is 8.7 hours a day, 7 days a week. I also spent time IN class!
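For anyone who wants to check my math, the figures above work out as claimed. A quick sketch (the 1100 hours and the August-through-December span are the numbers given in the slide):

```python
# Rough check of the Canvas time estimate: roughly 1100 hours
# logged from the first week of August through finals in December.
total_hours = 1100
weeks = 18

hours_per_week = total_hours / weeks   # spread across the semester
hours_per_day = hours_per_week / 7     # spread across a 7-day week

print(round(hours_per_week, 1))  # 61.1
print(round(hours_per_day, 1))   # 8.7
```

So "61 hours a week" and "8.7 hours a day, 7 days a week" both follow from the 1100-hour total.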
30. Reward? Tendonitis in my Shoulder https://www.nlm.nih.gov/medlineplus/ency/article/000438.htm
31. Other Effects: Stress, Exhaustion The effects of digital overload leave us exhausted and overwhelmed. They distract us, delay us, and take our time and energy. Like Sisyphus of Greek myth, who pushed a rock up the mountain only to have it roll back down, we end the day sometimes feeling triumph that we’ve checked certain tasks off the list, answered email, graded, responded to students, recorded points, tweeted, texted, posted responses to a status update done early in the day—and just as we go to turn away from our PC or laptop or iPad or smartphone, we realize there are new messages, new tweets, new texts, and all of our progress seems undone. And we are the grown-ups! What is it doing to our young people?
32. Social Media: an “ugly, evil distraction” Last semester I had several startling conversations with a handful of students who confessed they are struggling with college, having a hard time paying attention in class, and not able to focus on their homework for long due to digital distraction. A number of my Composition students wrote about it in their essays. One girl called her addiction to social media an “ugly, evil distraction” that led to her flunking a class in high school—and not being able to participate in a sport she loved. That was her wakeup call.
33. Digital distractions in teens given laptops 1:1 The first girl confessed to me that her high school had been an early adopter of the One to One program in 2010, giving every student and teacher a shiny new laptop. She had been thrilled and quickly found herself on Twitter, Facebook, and Pinterest, where she organized ideas for decorating rooms; unfortunately, her homework was always last on the list. Her school rushed into the project without a lot of planning, and teachers were not prepared or trained. Classes were chaotic in the early days, with too much time for students to spend on social media, ignoring assignments. She told me, “my brain was elsewhere...taking pictures, posting stupid tweets, and reblogging pictures.”
34. Not just an isolated case…. I heard variations of this story half a dozen times more and my concerns grew. Several students used almost the same words to describe the battle in their minds for getting organized, getting homework done, and staying off social media and/or their phones in class and later at home, while they were supposedly studying. Then, I read a report on CNN that freaked me out!
35. Being 13 -- Special CNN Report (Oct. 2015) CNN’s Anderson Cooper did a special report on “Being 13: Inside the world of Teens” (Hadad), and found that many of these kids check social media 100 and even 200 times a day. Teens don’t post that often; instead, they lurk to see if others liked their postings, or to see if anyone is saying mean things about them. Likes are a way to measure popularity. They also take LOTS of selfies--100 or more, to get that perfect picture. The study looked at 200 teens and included an analysis of 150,000 posts and messages by two trained psychologists. According to CNN’s Hadad, “The level of profanity, explicit sexual language and references to drug use surprised the experts, considering the study's subjects were only in eighth grade.”
36. Competing for attention
37. What can you teach between phone checks? If teens are checking their social media and texts 100 times a day, and we spread those checks over the 15 waking hours from 7 am to 10 pm, that means they are on their phones roughly every nine minutes, checking for updates or uploading selfies. If they are checking 200 times a day, the interval shrinks to about four and a half minutes. What can you accomplish in four or five minutes between checks for updates on social media? How about in nine? How do we teach children to problem solve, think critically, or reflect in such slivers of time? How do they learn complex Math and Science concepts and master formulas between checks?
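The interval arithmetic is easy to verify. A small sketch, assuming the same 15-hour waking day from 7 am to 10 pm:

```python
# Minutes between phone checks, given checks per day spread
# evenly over a 15-hour (7 a.m. to 10 p.m.) waking day.
waking_minutes = 15 * 60  # 900 minutes

for checks_per_day in (100, 200):
    interval = waking_minutes / checks_per_day
    print(checks_per_day, "checks a day -> one every", interval, "minutes")
```

At 100 checks a day the gap between interruptions is 9 minutes; at 200 it is 4.5 minutes.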
38. What’s going on? Being 13 Then & Now Think about what it was like when you were 13, sitting in an 8th grade class: maybe you were lucky and had at least one friend in there. You might write and pass a note, but you were expected to have the book open, be taking notes, or be working on math problems or completing a worksheet over something you read for class. If caught, your note might be read by the teacher—or you might be asked to read it out loud. NOW, think about a classroom FULL of 13-year-olds, all with smartphones: for one thing, it would be noisy, with lots of little alerts that new tweets, posts, and text messages were waiting for attention. How does the teacher compete for their attention? Or should she just sit back and check her own smartphone? According to the National Education Association, many schools are lifting bans on the use of cellphones in the classroom (Kinjo), and the Pew reports did not seem to indicate that phones were staying in backpacks, purses, and pockets during class.
39. What is ahead for these 13-year-olds? My student wrote, “I hope that students, teachers, parents, and members of communities can see the problem that is becoming an epidemic, and they will do something to fix it for themselves, their children, and their future.” She hopes to become a teacher herself and worries that her students will be giving their attention to their devices instead of her. How are we going to deal with these students and their mobile devices? How successful are they going to be in their educations and careers, much less their relationships? I’m retiring, so I don’t have to look forward to teaching these students, but I already see digital distraction and digital overload in my college students.
40. My college students: Hypervigilant & Impatient Attending to every audible alert or vibration of their smartphones is destroying their focus and eroding their ability to go more than a few minutes without checking their phones for a new text, tweet, photo, or status update. As noted earlier, the mere mention of grading an assignment sends them to their phones to check grades; furthermore, students are impatient to have work graded, and I sometimes have to say, “Look—this isn’t the drive-through window at McDonalds! It takes time.”
41. Is a lack of focus the new normal? As I near the end of my teaching career, I wonder what is down the road for Education at all levels. Students need the ability to focus on a piece of text in order to read, analyze and write about it; they need to concentrate in order to solve mathematical problems, do their science labs and write up the results, and listen to short lectures and then engage in discussion. Digital distraction and digital overload make those things difficult, if not impossible.
42. What needs to happen? Some Proposals….
• Read Carr’s books (The Shallows, 2010, and The Glass Cage, 2014).
• Make the Jitterbug the phone for ten-year-olds! (Why do ten-year-olds need phones? Some phones are now marketed to SIX-year-olds!)
• Educate parents, teachers, school boards, and administrators: BAN students from having phones in the classroom, since having them leads to continual use at the expense of focus and attention. If they aren’t checking their phones every few minutes or taking selfies, maybe they can focus and learn!
43. Some Proposals, cont.
• Make sure students are ready for 1:1 programs, along with the teachers, infrastructure, and curriculum.
• Educate parents about digital devices and young children: limit the time spent on the devices.
• Teach students how to unplug from technology in order to reflect, read critically, and focus.
• Some students (and adults) need serious intervention and some digital detox!
44. Here is help: Digital Detox, anyone? The Digital Blackout website (http://www.digitalblackout.org/about/) has a program to help schools show students the value of unplugging. CAN THEY GO THREE DAYS WITHOUT FACEBOOK? TWITTER? TEXTING? WHAT MIGHT THEY LEARN?
45. More Practical Suggestions I give my students
• Set priorities for each day. I carry a small clipboard with me to classes and meetings: it helps me keep on task and make note of things I need to do.
• Get something done before you let yourself get sucked into social media early in the day. Those cat videos, political rants, holiday recipes, and photos of the grandchildren can wait.
• TURN OFF all of the alerts that you possibly can on your smartphone, tablet, and laptop.
46. More Practical Suggestions, cont.
• Lower our expectations! It’s okay to reflect before firing off an answer to an email or text. It’s not a speed test in High School typing class. (Youngsters, ask a Baby Boomer about “typing” classes.)
• Set aside time each week to delete the glut of digital data clogging up our lives and PCs.
• Evaluate email newsletters and unsubscribe when possible!
• Consider making mealtimes a device-free zone: make eye contact, smile, and talk. Wow!
47. How to Focus: a Mind Map The mind map was created by Jane Genovese. It is pinned on Pinterest at https://www.pinterest.com/pin/36451078204422650/ and posted on her website at http://learningfundamentals.com.au/presentations/focus/
48. Mind map
49. What do you think??? So, what about us -- the grown-ups? Engineers, professors, librarians, business people, journalists...are we any better? In spite of spending 1100 hours online in the fall, I still like reading print magazines and newspapers, and I read both ebooks and print books. I often pack my iPad, my iPhone, and a small notebook or clipboard. I see many of you with smartphones and tablets, and hear you discussing books you’ve read. So, do those of us who read for decades before we went online have any different hard wiring? Are we better able to withstand the onslaught of digital distraction and overload?
50. What do you think, cont.?
• Have you noticed more people staring down, not making eye contact, and focusing on their devices during a meeting, a meal, or while out in public?
• Have you found yourself feeling distracted and overloaded by your devices?
• Are you concerned about the findings of the CNN report on 13-year-olds, or surprised by them at all?
• Do you see any strategies on the mind map that you are using?
51. Works Cited
“13 Reasons The Brain Craves Infographics.” Neomam.com. http://neomam.com/interactive/13reasons/
Carr, Nicholas. The Shallows: What the Internet Is Doing to Our Brains. Norton, 2011.
Carr, Nicholas. Author website. http://www.nicholascarr.com/
“Digital Detox.” Digitalblackout.org. http://www.digitalblackout.org/about/
52. Works Cited, cont.
Genovese, Jane. “How to Focus in the Age of Distraction.” Pinterest pin. https://www.pinterest.com/pin/432697476678222230/
Gunelius, Susan. “Data Never Sleeps. The Data Explosion in 2014: Minute by Minute.” Infographic. 12 July 2014. http://aci.info/2014/07/12/the-data-explosion-in-2014-minute-by-minute-infographic/
Hadad, Chuck. “Being 13: Teens and Social Media Study.” CNN, 13 Oct. 2015. http://www.cnn.com/2015/10/05/health/being-13-teens-social-media-study/
53. Works Cited, cont.
“Hidden Gems in ‘What the Internet is Doing to our Brains.’” YouTube. https://www.youtube.com/watch?v=_Yf_-5VHiR0
Richter, Felix. “Americans Use Electronic Media 11+ Hours a Day.” Statista, 13 Mar. 2015. http://www.statista.com/chart/1971/electronic-media-use/
“What the Internet is Doing to Our Brains.” YouTube, 6 May 2013. https://www.youtube.com/watch?v=cKaWJ72x1rI
Notes from The Shallows. Vialogue.wordpress.com, 10 Oct. 2013. https://vialogue.wordpress.com/2013/10/10/the-shallows-notes-review/



Saturday, October 22, 2016

Monday Morning Quarterbacking
A Supper Club Talk
Lynn A. Brant
October 2016





There has been an unwritten (and quite often not followed) rule that one must not speak on one's professional area. I'll be delving into history a bit in this talk, but my only professional component in history is "earth history". I have often talked about events of hundreds of millions of years ago, but tonight I will go back only seventy some years.

A letter to The Atlantic magazine once claimed that if the colonists had been a little milder in their rhetoric in 1775, we might have avoided the Revolutionary War, and over time our differences with England would have diminished. England abolished slavery in 1833, so, still part of the Empire, we would have been spared our Civil War. Then in 1914, Germany would have seen that England, with the strength of the United States behind it, was too great a foe, and would not have started World War I. Without WW I there would have been no WW II. Our country would be like Canada, and all would have been at peace.

As somewhat of an Anglophile, I find this pleasing; however, it is total and utter nonsense. Maybe the war with England would have started in 1833, or maybe that would have been the start of our Civil War. Maybe there would have been some other world war with Maine and Mississippi on opposite sides. Who knows?

Looking back in history, one can make reasonable cause and effect connections to show that A caused B which then caused C, but eliminating A in the past would not have created a vacuum of events from that time forward. If no A, then no B and C, but we might have had events D, E, and F. The progression of the course of humanity through time is an almost infinite series of actions taken by millions of people at every turn. Once an action is taken a whole new set of options then exists.

John Lewis Gaddis in his The Landscape of History uses the metaphor of a landscape for the past. The historian cannot visit that landscape but tries to understand it in her mind. It is a landscape partially shrouded in fog. Even if we had a time machine to take us to that past landscape, we would still have only a view of it no larger than that of just one person. We cannot get into the minds of the people of the past except as documents and other accounts permit. This is the job of historians.

A characteristic of history is that it is chaotic, meaning that outcomes are very sensitive to initial conditions. As Gaddis points out, the actions of a Hitler or a Lee Harvey Oswald altered everything from that time forward. But what if the person making the cartridge that went into Oswald's rifle had had a moment of distraction that allowed an imperfect shell to not fire when Oswald pulled the trigger? That, or any of a million other things, could have altered the events of that day in Dallas. Every moment is the beginning of the rest of history, and the outcomes are sensitive to the conditions at that moment.

The participants of history - that being everyone who is alive - also cannot know at the time how things will work out in the future. We make decisions on what we think will produce some desired effect, but we can never be sure. As Yogi Berra once said, "prediction is difficult, especially about the future" (more or less). Looking back and examining a moment in history and then predicting what would have happened in light of different decisions people might have made at that time is fraught with the same difficulties. We simply don't know what would have been the outcome in world events if Oswald's rifle had not fired. Would there have been the Viet Nam War or the civil rights laws that were passed during the Johnson administration?

To criticize or condemn the decisions made in history by other people is Monday morning quarterbacking. It's easy to criticize from a distance of time and space, and say, "they should have done ..." or "they never should have done ...". We who are making those post hoc evaluations were not there. We are even limited in using history for our own decision-making, because the future landscape will not be that of the past. Think of the wisdom and utility of the Maginot Line, which was built with the historical knowledge of WW I in mind!

The summer of 2015 marked the 70th anniversary of the end of World War II. Although I was only three in that summer of 1945 and little aware of world events taking place, the War shaped my life in many ways in the seven decades since. This was especially true of my service in the Navy during the Viet Nam War where I served with and under men who fought at Midway and other places. We were even using some of the weapons from World War II in Viet Nam.

Many analyses of the War have been made since 1945, and many of the participants have written books telling their view of the conflict. Many of these books are self-serving, and all the authors had only a limited perspective of that great struggle. All the important decision makers are now dead, and none of us was there, but people like to argue about what happened and what should or should not have happened. These arguments amount to Monday morning quarterbacking.


On the morning of the 6th of August, 1945, the United States used an atomic bomb on the Japanese city of Hiroshima with an explosive force equivalent to about 15,000 to 20,000 tons of TNT; it wiped out the center of the city and something like 100,000 to 150,000 lives. Three days later the U.S. used another atomic bomb on another city, Nagasaki, killing at least another 50,000 people. These were the only atomic weapons that have ever been used in warfare. More than 70 years after these events, several countries around the world now possess the ability to go to war using nuclear weapons, but so far they have not been used. On and around the 70th anniversary of the bombings I saw lively discussions online about whether using the bombs was morally and militarily justified. This topic is ripe for discussion and contemplation, and many people have strong feelings about whether we should have used those weapons in 1945. But the fact is that they were used and many people died. It is part of world history, and nothing will change that. But the Monday morning quarterbacking is still going on.

Looking back, decades after the events, many have put forth various arguments why we should or should not have used the bombs. Many of the arguments for and against using the bombs have merit, but all are somewhat affected by the distortions inherent in all historical accounts. No one in this room was in or near Hiroshima or Nagasaki in 1945, nor were any of us in the decision-making roles in 1945. We must rely on personal accounts handed down and documents in libraries and the like. Even more so, we depend upon historians who have analysed these accounts and documents. Many of the first-person accounts are very biased in attempts to make the authors look good or to justify their decisions in wartime. Others at the time expressed views and opinions based upon their very limited awareness of conditions surrounding events at the end of the war. As Bob Robinson once said to me, anything written within the first fifty years of an historical event is questionable. Good, well-researched history, written by impartial analysts after the passions of the events have cooled down, comes the closest. And as Gaddis says, even those accounts are subject to different interpretations. In addition to analysing the War itself, we can assess the long-lasting consequences of the events and decisions made during the War; the 70 years that have elapsed since give us the opportunity to do so. Other than seeing the effects on those two cities at the time, no one in 1945 could have foreseen the long-term influences of those bombs over these intervening years.

Many of those claiming that the United States should not have used the bombs against Japan seem to base their case on three main arguments. First, using the bomb to kill innocent civilians was an immoral act. Second, they claim that the war was essentially over, Japan was defeated and about to surrender and the use of the bombs was pointless slaughter of civilians. Third, we should have exploded one bomb in a remote area in a demonstration of its effects so the Japanese would realize the potential destruction of their country if they did not surrender. I think each of these claims is fraught with logical weaknesses, and I want to explore these just a bit.

Unlike some other wars that the United States has been involved in, we entered World War II with great reluctance. The Nazi war machine had been running over Europe for more than two years before Japan attacked Pearl Harbor and Germany declared war on us in December of 1941. Japan quickly ran over much of the Pacific as Germany was destroying civilization in the other half of the world. We weren't fully aware, at that time, of the atrocities being carried out by these two military powers, but we knew we had no choice but to fight an all-out war. Before it was over in 1945, millions of people had died, and many more millions had lost spouses, children, parents, and friends. Many were made homeless as their cities were attacked and, in many cases, destroyed. The war was not fought on isolated battlefields away from civilian populations, but on land, in the air, and on the sea over very large portions of the planet.

New technologies were applied in this massive killing machine: the latest battleships and aircraft carriers, heavy, long-range bombers, advanced fighter aircraft, radar, proximity fuses, napalm, jet aircraft, ballistic missiles, atom bombs, and more. And the war was led by a fanatical attitude of the Japanese that saw a greater dishonor in surrender than in complete annihilation of their country. Hitler wanted the Germans to fight to the last man. There has never been any war on the face of the earth that was like World War II. We had no choice about fighting that war and no choice about winning it.

Morality is defined as the character of rightness or wrongness, being in accord with principles or standards of right and wrong. An act is moral if it is in accordance with a set of standards of good conduct. But where do these standards originate? E. O. Wilson explains that humans are a eusocial species with the capacity for empathy and altruism, in that they sometimes place the welfare of the community above their own individual welfare. Part of being human is the recognition and necessity of a moral code. Morality permits individuals to get along harmoniously within society. But this moral code is not etched across the cosmos; it changes through time and among different groups of people, and the capacity for altruism apparently goes back in time to before our species evolved. What might be moral at one time within a certain group might not be moral in another time and place. Think of the changing attitudes toward slavery, race relations, and the worth and rights of women. The dentist from Minnesota who shot Cecil the lion would have been within the moral code of a century ago but found his actions at odds with those of many Americans in the 21st century. Alcohol passing over the lips is regarded as immoral by Methodists (at least it was when I was growing up), but Lutherans take communion using real wine!! And Unitarians drink the stuff for fun!

But the peace-time moral code breaks down - and may even become a disadvantage - in war. When the survival of one's community is at stake, the dominant rule that applies is to survive and to win the struggle, and the moral code is altered to fit. The altruism is now directed only toward members of the group one sees oneself as being in. Now the welfare of one's companions takes precedence over the individual's welfare. A soldier runs out into the rain of shrapnel and flying bullets to save a buddy, not just to win the war for the United States.

Especially in World War II, the distinction between combatant and civilian disappeared. Those in uniform under immediate fire were supported by all their comrades who worked toward victory: the fellows who loaded the artillery shells into the guns, the guys who ran the engines of the aircraft carriers that launched the bombers that went after the enemy planes and ships that threatened the Marines on Guadalcanal and other battlefields. But this chain of support ran back to the scientific labs designing new weapons, the factory worker who produced those weapons, the woman who riveted the wings on the fighter planes in St Louis, and the farmer in Iowa who grew the grain to feed those guys dodging bullets on the battlefield. These people were no less part of the war than the ones in immediate combat. This was also true of the Germans and the Japanese.
The moral code becomes "to do one's duty", to contribute to winning the war, to defeat the enemy, and if necessary, to kill and maim, to render wives widows, and children orphans. A PBS series a few years ago was about an American fighter pilot over Europe after the D-Day landings. He told of strafing German soldiers and having to decide whether to aim slightly differently to take out a soldier who was about to escape his guns. He said he knew that soldier probably had a wife and kids, a mother and father, and hopes for a long life, but that same soldier might kill an American soldier the next day. He had no choice but to touch the rudder and fill the man with bullets. The life of the potential American soldier became more important than the life of the enemy combatant. His duty was clear. The moral code of war took precedence over our peace-time ideas of right and wrong.

However, "you can't escape thinking about history in moral terms," says Gaddis in his The Landscape of History. "The reason is that we [humans] are, unlike all others, moral animals." Gaddis, quoting R. G. Collingwood, says, "History cannot be scientifically written unless the historian can re-enact in his own mind the experience of the people whose actions he is narrating." Gaddis goes on to say, "The resulting impressions will never be the same as your own." I take all this to mean that we can evaluate the morality of people and their actions in history, but we must be extremely careful. Evaluating the morality of Hitler is easy; that of others, not so. Judging the morality of the atom bomb must take into account more than the horror of the killing of two hundred thousand people at the time.

There are two other considerations that apply to historical events, and especially to World War II. The first of these is the recognition that nobody had a complete, synoptic view of all the events and conditions at the time. This is often referred to as "the fog of war," but it applies to peacetime events as well. All wars involve new technologies and tactics, but World War II was outstanding in its use of new weapons and the need to meet new challenges. Much of the War was trial and error. We tried bombing ball bearing plants because we thought that would bring down the Nazi war machine, but it didn't work. We bombed German aircraft plants, but that only partially worked. When we bombed their oil supply and synthetic oil plants we got some real results. We also bombed rail marshalling yards to considerable effect, but that killed a lot of civilians. We finally bombed the hearts of cities, resulting in more deaths. After we started to use B-29's against Japan we leveled city after city. We used bombing campaigns that we thought would shorten the War and avoid more loss of lives on our side. In addition, the Germans were developing new weapons at a fast clip, such as jet fighters and ballistic missiles, which made victory as quick as possible an overriding concern. We had to do what we thought at the time was necessary to end the War, and to end it quickly.

I find it interesting that the use of the two atom bombs against Japan is considered by some as a moral issue, but the fire bombing of Tokyo in March of 1945 that killed nearly as many people is rather forgotten. There is also the matter of the killing of hundreds of thousands more in other bombing campaigns. Is the killing of 100,000 people in a millisecond flash less moral than killing the same number over several hours in a firestorm? The objection to the use of atomic bombs against Japan on moral grounds has its weaknesses, unless one wants to argue that our whole war effort was immoral and that we should have never fought. But not fighting the enemies in World War II would have been encumbered by many bigger moral questions.

To criticize the morality of Truman and his advisors suggests that our morality is superior. Can we justify that? What gives us reason for such a belief? Had we been there, knowing what was then known (not what we know now), would we have acted differently? Some of us would and some wouldn't - that is the nature of conducting a war. And we don't really know which of us would have made the wiser decision over the course of history. And who among us has had the burden on our shoulders of having to win a war?

The second criticism of using the atomic bombs was that Japan was defeated and about to surrender, and that the bombs were not necessary; the bombs were overkill, if you like. But this argument is also weak.

I've never done this experiment, but it is claimed that if you throw a frog into hot water it will immediately jump out. However, if you place a frog into a pot of cool water and then gradually heat it up, the frog will not jump out. The frog gradually gets used to the rising temperature and will cook to death. Of course, we see lots of cases of this kind of thing among humans - the rise of violence and poverty in our cities, for instance. This also applied to Japan in World War II. Although a long way from being defeated after Midway and Guadalcanal, Japan's fortunes were headed downhill after those battles. Once we took the Marianas, from where we could reach all parts of Japan with our B-29's, and after we effectively destroyed their navy at Leyte Gulf, Japan had essentially no chance of winning. Without a navy, her troops, scattered across the Pacific, could not be brought to bear, and she was cut off from vital supplies. The Battle of Leyte Gulf was in October of 1944 - more than nine months before Hiroshima. During those nine months we took Iwo Jima and Okinawa at great cost. Just in the battle for Okinawa, 12,000 Americans were killed and 36,000 wounded. The Japanese lost 110,000 soldiers and some 150,000 civilians. We bombed city after city, including that raid on Tokyo in March of 1945 - five months before Hiroshima. The Japanese themselves had determined that they had lost the war as early as January 1944, a year and a half before Hiroshima.

The Tokyo raid on 9 March destroyed over a quarter-million buildings, left over a million people homeless, and killed or wounded more than a hundred thousand. Two days later we hit Nagoya with 1790 tons of incendiaries, and two days after that B-29's dropped 1644 tons of incendiaries on Osaka. Then Kobe was hit three days after that by 2400 tons, wiping out much of that city. In just eleven days we flew almost 1600 sorties against these four key industrial cities. After March of 1945 no tanker reached Japan to bring in the oil she needed to carry on the war. Those in command of running the war were getting used to defeat. They were numb to the destruction of their military and their cities. There was no question about Japan being defeated by the summer of 1945, but when would she surrender?

What the atomic bombs did was to throw, in a manner of speaking, a splash of hot water on the frog. But even after the destruction of Hiroshima, the Japanese Supreme Council for the Direction of the War was deadlocked. Calling for one last great battle on Japanese soil, General Anami, the war minister, argued against surrender and is quoted as saying:

"Would it not be wondrous for this whole nation to be destroyed like a beautiful flower?" (quoted in Hopkins)

There was a big question among the Americans about how to end the war. The Soviet Union was prepared to enter the war on the 15th of August. There were American plans to invade the home islands with a million troops, in what could be expected to bring very heavy losses. The Japanese had two and a half million troops on the home islands, and they were conscripting all males between 15 and 60 years of age and all females between 17 and 45. These "civilians" were being armed with everything from bamboo spears to carpenter awls. Everything the Japanese could use to kill Allied soldiers was being prepared for this final, and awful, bloodbath. Truman later wrote that he had asked General Marshall about casualties if we were to invade Japan, to which Marshall indicated a quarter million American casualties. That did not include Japanese losses.

Truman had advisors in his decision to use the bomb. A highly secret committee of eight members, including three prominent scientists, was established and met in May of 1945. At the end of the month the committee met for two days with an additional advisory panel of four physicists involved in building the bomb: Enrico Fermi, Arthur Compton, E. O. Lawrence, and J. Robert Oppenheimer. The committee and scientific panel were unanimous in recommending use of the bomb against Japan as soon as possible, and in a way that would serve as a demonstration of its effects.

In mid-July the Allied leaders met at Potsdam, outside of Berlin. Shortly before meeting with Churchill and Stalin, Truman learned of the successful test of the atom bomb. The Potsdam Conference put together terms for Japan's surrender, terms that some thought were quite generous. The offer was at first rejected, but it was essentially accepted by Emperor Hirohito the day after the bombing of Nagasaki.

An invasion of Kyushu was planned for November 1945, and the Americans, including Truman, expected the war to last well into 1946. Had Anami's call to let Japan be destroyed prevailed, the bombs may indeed have saved millions of lives on both sides. The war was definitely not over!

David McCullough, the historian, writes:

"And how could a president or the others charged with responsibility for the decision, answer to the American people if when the war was over, after the bloodbath of an invasion of Japan, it became known that a weapon sufficient to end the war had been available by midsummer and was not used?"


This leaves the third question: whether it would have been better to blow up some unoccupied area to demonstrate the power of the bombs. Truman's secret advisory committee considered this option and decided against it. I find this Monday-morning argument particularly weak.

Disregarding the practical aspects and logistics of exploding a bomb (which we weren't even sure would work) in a place that would make a mental impact on the Japanese command, what would such a demonstration have accomplished? Singed grass and blown-over palm trees aren't very convincing. What it would have shown the world is that we had a very powerful weapon, and every country would have decided it needed one too, without understanding the true effects such a weapon could produce. The bombs we had in 1945 were not one-off, never-to-be-repeated devices; they were the opening of a brand-new technology that would eventually spread around the world. Singed grass and blown-over palm trees do not have the emotional and intellectual effect of a leveled city, a woman's dress pattern burned into her back by the intense radiation, or a best-selling book, John Hersey's Hiroshima, describing the effects of such a weapon. Would Hersey have written about singed grass and blown-over palm trees?

Within a rather small number of years several countries had the bomb - and not just bombs measured in tens of kilotons, but bombs measured in tens of megatons. Bombs one thousand times as powerful as those used on Hiroshima and Nagasaki were in the hands of the Soviets and several other countries.

We can make a good argument that Hiroshima and Nagasaki were indeed demonstrations - realistic demonstrations - of the effects of atomic weapons. The very fact that no nuclear weapon has been used in war in over seventy years bolsters that argument. Had the several countries that went on to develop thermonuclear weapons, as they did, then plunged the world into a war using those weapons, more people would have been killed than in all of World War II. The demonstrations at Hiroshima and Nagasaki may have done much more than end that war; they may have saved the world itself over the following decades. Singed grass and blown-over palm trees would not have done that.

Referring back to The Atlantic letter suggesting that the colonists' rhetoric was too strong and that history would have been much different had we been more polite: not using atomic bombs in World War II would likewise have affected history in unknown ways. The war might have lasted a few more weeks or months, many more would have been killed in conventional warfare, and the Russians might have insisted upon a division of Japan like that of Germany. Of course, we'll never know what might have happened. But imagine a divided Japan controlled in part by the Soviets: would that have been good for the Japanese? Monday morning quarterbacking can't evaluate these imponderables.

Readings:

Ambrose, Hugh, 2010, "The Pacific", HBO, 489 pp

Bradley, James, 2003, "Flyboys", Little, Brown and Company, 398 pp

Cutler, Thomas J., 1994, "The Battle of Leyte Gulf: 23-26 October 1944", Naval Institute Press, 343 pp

de Waal, Frans, 2013, "The Bonobo and the Atheist", W.W. Norton & Co., 289 pp

Gaddis, John Lewis, 2002, "The Landscape of History: how historians map the past", Oxford University Press, 192 pp

Hersey, John, 1946, "Hiroshima", Alfred A. Knopf, 152 pp

Hopkins, William B., 2008, "The Pacific War: the strategy, politics, and players that won the war", Zenith Press, 392 pp

McCullough, David, 1992, "Truman", Simon and Schuster, 1116 pp

Miller, Donald L., 2006, "Masters of the Air: America's bomber boys who fought the air war against Nazi Germany", Simon and Schuster Paperbacks, 671 pp

Thomas, Evan, 2006, "Sea of Thunder", Simon and Schuster, 414 pp

Wilson, Edward O., 2014, "The Meaning of Human Existence", W.W. Norton, 207 pp