I recently tried to open some archived files, and… couldn’t. Some are old novellas I wrote for National Novel Writing Month, and the successors (direct and indirect) to the word processing software I used (AppleWorks, which used the ClarisWorks file format) have evolved to no longer recognize those files. Other files, from the same software’s spreadsheet feature, contained indexes of hundreds of rolls of film in my art archive, and indexing those was an effort I am loath to repeat. Those CWK files also seemed to be locked away from me forever.
In the past, people online recommended the same direct/indirect successor software that had already failed me. Despite this, I searched afresh, and read a helpful comment on someone else’s similar plea for help: LibreOffice (the successor to OpenOffice) could likely open old CWK files.
LibreOffice did it! I was able to open both the novellas and spreadsheets immediately, and save them as modern, non-proprietary file types.
I donated to the Document Foundation immediately, in support of their goal to help creators own their own content. I also recommend LibreOffice strongly for this purpose. This is a great tool.
Note: if you’ve got old files that you have been backing up but not migrating to newer software, and their content is important to you, consider converting them into contemporary, non-proprietary formats and/or other “archival” formats appropriate to their content type.
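If you have LibreOffice installed, you don’t even have to open each file by hand: its headless mode can batch-convert files from the command line. A minimal sketch, assuming `soffice` is on your PATH and your old files use the `.cwk` extension:

```shell
# Convert every ClarisWorks/AppleWorks file in the current directory
# to OpenDocument Text, writing the results to ./converted/.
mkdir -p converted
soffice --headless --convert-to odt --outdir converted *.cwk

# Spreadsheet files can be converted to OpenDocument Spreadsheet the same way:
soffice --headless --convert-to ods --outdir converted *.cwk
```

You can substitute other target formats (such as `pdf` or `docx`) for `odt`/`ods` if those suit your archive better.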
If you really treasure something, you may also consider making hardcopies of it, if that is practical. If you don’t have access to a home printer, want it printed quickly, or want special binding, you can have your files printed in any number of sizes and formats (bound, loose leaf, printed as books) at standard office-support and copying services. Some of my writing from the 1990s was not portable to other computer platforms or rival software packages, and so the only form in which I still have that work is hardcopy. Paper works!
I’ve finally read this clear and well-organized book about the design of data-centric automation tools, and how often their potential has been squandered or misused. We can do so much better!
O’Neil is a math Ph.D. and professor who went into industry and was distressed at how proprietary algorithms are being used in potentially harmful real-life situations without thoughtful oversight. The fact that technology is involved at all leads to something like blind faith from the businesses and organizations that apply it. She firmly believes algorithms CAN be used for good, but won’t be under current approaches. You can’t have good outcomes if the goal is to make a quick buck, keep the approach secret, and never improve it! These tools are too often used in ways which only reinforce existing inequities.
Her examples are thoughtful and described in depth.
A major flaw in data automation is the use of proxy data, and I was glad to see this called out. How do you measure whether someone is a good teacher, whether they would be a good employee, whether they should receive a good deal on your product, or whether they are a risk to the community? Without a single, obvious thing to measure, people make stuff up that is easier to quantify, and then encode their wacky idea into an “objective” measurement that doesn’t really measure the subject at all. The wacky measurement is then obscured as a proprietary secret, and sold as a product to businesses, which want answers cheaply more than they want accuracy. The less regulated the industry, the wackier some of the data and measurements become.
For example, good teaching is hard to measure, so instead the system may measure a change in test scores… but if the students were already getting all As, there is no improvement possible, so the teacher may be marked down, and not know why. Unscientific personality tests may be used to screen potential employees, or robots may just scan applicant resumes for keywords, without any real indication that those tools result in better employees.
Many of these approaches are NOT ready for real-world use, but are used just the same. O’Neil cites the Michigan automated unemployment auditing system, which falsely accused thousands of people of unemployment fraud and destroyed livelihoods (and marriages), as a prime example. That error is still playing out, and will play out in the courts for a long time, per this Detroit Free Press article: Judge: Companies can be sued over Michigan unemployment fraud fiasco by Paul Egan & Adrienne Roberts (March 26, 2021). To quote from the article, “The state has acknowledged that at least 20,000 Michigan residents — and possibly as many as 40,000 — were wrongly accused of fraud between 2013 and 2015 by a $47-million computer system, purchased from FAST, that the state operated without human supervision and with an error rate as high as 93%.” Officials blindly launched this system without human checks, because yaaay, technology?
As someone who keeps being asked by one credit agency about cars I’ve never owned and pet insurance I’ve never purchased, I know that we’ve already automated some data projects badly. O’Neil cites other professional data scientists who have proposed sensible industry standards, and she has additional, more specific suggestions on top of this.
I can hope that the popularity of this book, which was a NYT bestseller, will push decision makers toward better, more ethical, and fairer practices as a result of her ideas.
Are you interested in surveillance capitalism and how we are going to survive it? McSweeney’s Issue 54 is for you! While it was published in 2018, it remains completely topical.
At the time I’m writing this, active dis- and misinformation campaigns from a variety of sources are hoping to influence the public in multiple countries to change their votes or behaviors with “stories” about the elections, the global pandemic, civil rights, race relations, protest voting, and other topics. These campaigns are using technology to inexpensively spread their messages, often through unwitting social media consumers, if not through individuals easily converted to new causes online. Meanwhile, a major social media platform is being criticized for secretly changing its algorithms to favor right-wing figureheads who have been chummy with the company’s CEO. The manipulation had measurable, real financial impacts on sidelined news organizations, though the changes were hotly denied at the time. (You can read more in Clara Jeffries’ Twitter thread on this topic, which has some great links to other resources on this story.) There is no obvious path to hold this platform accountable for its actions, or to keep it from giving resources to bad actors spreading misinformation.
There are companies using technology you enjoy to change your behavior, and the strange discomfort you feel about what you’ve shared with them (and their business partners, seen and unseen) is based on real concerns.
Issue 54 isn’t a compilation of the single-breach/oversight articles you’ve already read. This thoughtful collection of essays spans the technical AND the philosophical: the embrace of daily life surveillance by both “free” capitalist societies AND repressive regimes; the way data is used to maintain existing power structures, so majority communities tolerate surveillance at the expense of law-abiding minorities whose efforts for social justice are violently repressed; how individuals receiving any social services are forced to give up data about their families that the wealthy can keep to themselves; and what could happen if we reframe privacy from an individual choice to a community-wide asset, whether information is demanded by government authorities or corporate entities selling our data for profit.
This collection is SO THOUGHTFUL. In a world where people are programming their own biases into AI, it’s also quite urgent. I recommend this collection with great enthusiasm – and concern about where we think we’re going, versus where we actually seem to be going.
I sometimes think I expect too much. I’m reading McSweeney’s 54: The End of Trust, and thinking of my early-1990s ideals around the Internet and personal computing, both of which were evolving so rapidly then.
Confession: I thought that everyone I knew would be doing AMAZING THINGS with this technology, especially once software and hardware evolved for key activities like music composition, video editing, digital painting, and more. I was certain that everyone I knew had a hidden composer/filmmaker/author/artist in them just WAITING to get out.
But… it’s 2020, and most people I know use this truly amazing content creation/sharing infrastructure to: post photos of their (purchased) meals, post photos of themselves drunk, make their children self-conscious by “sharenting” (oversharing as parents), repost unreliable information found on unreliable source pages of dubious origin, or watch (and repost) cat videos.
*me staring at the reader with alarmed expression*
Yes, there is some nice citizen science stuff, and the libraries are doing a great job, but I EXPECTED that.
No, really, I didn’t expect [gestures toward the screen] this. I thought tech would revolutionize education and communication and film and science in ways that… HAVE NOT HAPPENED. I did not anticipate “social media” where people talk about themselves endlessly, and misrepresent how they look and live to their “friends.” I was not cynical enough to envision paid “influencers” peddling products. I would not have anticipated the web being used to hype imaginary events like the Fyre Festival. I did not think so many people had an inner scam compulsion and/or marketing ambition and/or cat obsession to resolve.
I… feel like such a wacky optimist.
ANYWAY, the End of Trust is really good, and I can tell because I am using up tape flags over things I want to dwell on further. It’s a great sign. I’ll write about it, of course.