Quote from: Roll on February 27, 2018, 01:43:33 PM
I speak generally of explicitly technological progress, which, while branching, certainly can be clearly defined by any reasonable standard as moving forward, insofar as it isn't impacted by extraneous circumstances (i.e., catastrophic events, mass book burnings in history, etc.). Cultural and sociological progress is a different thing entirely, and outside the scope of a driverless cars discussion.
As I understand it, "forward" can only genuinely refer to the natural progression of causes and effects - being humans rather than photons or some other subatomic particle, time is fixed for us - and I'm sure we can all agree that causes happen before effects. I do not view increasing complexity, or the filling of 'notional space' (a concept referring to the "room" that ideas and notions can occupy in a socio-psycho-cultural space, the implication being that there is a maximum carrying capacity), as a good thing. Right now I'm very skeptical that the net positives outweigh the net negatives - or that the net positives are actually positives at all, and that the rubric used to determine them isn't just plain broken.
Quote
I did not say that technological progress was morally good; on the contrary, as I explicitly pointed out, some subjectively view this issue as bad. I simply stated that you can't take what favors you (again, the very fact of our conversation) without also taking what does not. Technology is inherently neutral, and its morality or goodness is determined by its use, but it is virtually impossible to pick and choose, particularly considering that it is subjective in the first place.
I also speak of it from a logistical perspective: not that you shouldn't stop it, but that you can't, not practically. The advancements are being made, and short of millions of people going Unabomber, there's not a whole lot you can do about it; that ship has long since sailed. The better avenue is to focus on handling it and the fallout properly (i.e., proper government handling of structural unemployment, to go back to the automation issue). (Though in the instance of driverless cars, I would certainly suggest that you shouldn't. The good far outweighs any potential bad.)
I would disagree that all technology and all objects are neutral - I'd hardly consider a guillotine or conversion therapy (a psychological technology) to be neutral, let alone discriminatory language, language being a communicative technology. Also, if technology were neutral, why is so much of it illegal? Culture certainly doesn't agree with this notion, and since there's no escaping cultural norms, only trading one set for another, I couldn't in good faith argue that technology is neutral.
From my research, it wouldn't take a million Unabombers to undo this, because we're setting ourselves up for structural failures pretty well as it is. The more complex a system is, the more moving parts it has, and the more opportunities there are for a black swan event. Driverless cars are not an antifragile technology, and neither, frankly, is the whole of digital infrastructure. As far as structural failures and resource bubbles are concerned, Gail Tverberg, an actuary internationally known for her work on energy and economic forecasting, has a lot of good information on the subject. There are too many potential places to start, but I'll suggest this slideshow presentation.
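To make the "more moving parts" point concrete, here's a minimal sketch of series-system failure odds - my own illustration, not something from Tverberg's material, and the numbers are hypothetical. If a system only works when all of its parts work, and each part independently fails with probability p, then the chance of some failure is 1 - (1 - p)^n, which climbs toward certainty as n grows.

Code:
# Minimal sketch: a "series" system that fails if any one component fails.
# Assumes n independent, identical components, each failing with
# probability p_fail (illustrative values only).

def system_failure_probability(n_components: int, p_fail: float) -> float:
    """P(at least one of n components fails) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_fail) ** n_components

if __name__ == "__main__":
    p = 0.001  # hypothetical 0.1% failure chance per part
    for n in (10, 100, 1000, 10000):
        print(f"{n:>6} parts -> "
              f"{system_failure_probability(n, p):.1%} chance of some failure")

Even with very reliable parts (99.9% each), a thousand of them in series fail about 63% of the time, and ten thousand essentially always. That's the intuition behind the complexity point above: reliability of the parts doesn't buy you reliability of the whole.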
I'd say it's hubris to assume that the greatest liability the system has is the incompetence of the user - rather, the greatest liability is the incompetence of its designers, which cannot ever be engineered away.
ETA: Oh, her most recent one, Nine Reasons Why Globalization Can't Be Permanent, is also relevant to the discussion, driverless cars and all - especially since car parts are made in so many places, and digital infrastructure can't really exist without tightly connected global markets, given the nature of computer hardware production.