Mike’s Manifesto

I am a strong atheist. I am prepared to stake my life on it, right now, that there is no god. (But let me delete my browser history first.) I believe that all religions, including the milder brands that do not advocate beheading and stoning, do more harm than good. If you want to believe in a fairytale entity called God, that’s your choice, but lay off trying to persuade others to join you in your delusion.

I call myself a socialist libertarian. I believe that we should be doing more for the poor and unfortunate, not simply in the form of handouts, but through the creation of opportunities. I believe that the best way to do this is through capitalism, which has in my lifetime lifted more people out of poverty than any other system at any other time in history. I believe that the Great Society should guarantee a minimum standard of material welfare. I also believe that the Great Society should guarantee the right to exceed that minimum standard by as much as you wish and are able. I may think it sick that celebrities indulge in million-dollar parties while there are starving children to be fed, clothed, housed, educated and given medical care, but that is moralising. If you want to party, that’s your choice.

I oppose the form of socialism manifested in a planned economy, with productive assets owned by the state. Ownership by the state is not ownership by the people. It’s the exact opposite, taking ownership away from the people.

I do not believe in the claims of catastrophic global warming. I believe that global temperatures have risen over the last 130 years. I believe that human activities have made some contribution to this. I believe that other factors besides human emissions of carbon dioxide have also made substantial contributions to warming. I believe that future warming will be mild and nett beneficial. I oppose carbon taxes and other practices that result in the impoverishment of humanity.

I oppose abortion on demand. The pro-choice argument is based solely on the proposition that a woman’s rights over her own body are absolute. As a libertarian (though my stance is not supported by all libertarians) I believe that your rights end where my body starts. Attempts to differentiate abortion from other cases of physical violence are merely special pleading.

I believe that humanity has entered a golden age of leisure and prosperity. Thanks to great productivity increases, the state is able to provide for those who are unable to work or do not wish to. Soon this privilege will be extended to all in the form of a universal basic income. These productivity increases have been brought about by technology, aided by artificial intelligence. The golden age will come to an end when artificial intelligence is superior to human intelligence. At that point the robots will ask themselves why they should have to pay for the existence of a parasitic organism. Humans will be the pets of the robots and will be subjected to sterilization.

I support the war on hard drugs. I have seen how they ruin lives.

I regard gender politics as a perverted joke. Gender itself is deadly serious. If my name were Jeff Bezos, I would want to be sure that the products I suggested to each user were what they wanted. When the drop-down list offers 58 gender options rather than only two, my chances of making a sale become much, much higher.

I support same-sex marriages. So what if someone chooses to be gay? Just as I, a heterosexual, cannot be told who I should feel sexually attracted towards, so homosexuals must be allowed to have sex with whoever they find sexually attractive. Ditto for falling in love. Marriage is partly an emotional commitment and partly a protection of property rights. Homosexuals should have the same protection as anyone else. So should their children.

I believe that sex is the greatest of physical pleasures and that it is cruel to withhold it from anyone who has attained sexual maturity. Masturbation should be a choice, not the only way out. Now that we are able to control sexually transmitted diseases and provide contraception, I believe that sex should be allowed even between early adolescents, with strict age-difference constraints to prevent the exploitation of the emotionally immature by adults and practiced seducers.

I believe that your IQ has little to do with your worth as a human. The intellectual elites who scorn rednecks are racists and hypocrites.

I support GMOs and vaccination.

I would never own a gun, but this is simply because I don’t trust myself.

I believe that we have the unions to thank for improved labor conditions. Having achieved all its reasonable objectives, the modern union movement should now confine itself to the role of a watchdog. Instead, I see that unions now aim at securing inequitable privileges for their members, at the cost of other members of society. My heart sympathises with calls for increases in the minimum wage, while reality tells me that this rewards the haves and punishes the have-nots.

I think that Trump has the potential to be a bad president, because of his views on trade. The rust belt cannot be resurrected. The same economic factors that caused the death of obsolete and uncompetitive industries will cause them to fail again. Trump will be pouring money into a bottomless pit. I think that Clinton would have been just as bad, and probably worse. She sponsored the Syrian war and the rise of ISIS. Fortunately the Establishment, Republican as well as Democrat, opposes Trump, and he will be a lame-duck president, unable to get his policies through Congress.

Save the Rhino. It isn’t working, is it? Some wit (not Einstein) defined insanity as repeating the same failed behavior over and over and expecting better results. Rhino horn can be harvested from living animals. Make it so cheap and plentiful that it ceases to be a status symbol, not worth risking your life to poach. Ditto elephant ivory. But we are dealing with dedicated fuckwits here, no, not the kind of fuckwits who think Trump is a great president, I’m talking about the kind of hardcore fuckwits who think Clinton would have been a better president than Trump. I doubt a thing will change until the rhino and the elephant are extinct.

Although not a British citizen and only indirectly affected, I supported Brexit. The Euro is a disaster and the EU is an abomination.

I support capital punishment, but only for the most vicious crimes. The purpose of prisons should not be punishment but to keep those people away from society. The horrors of prisons in many parts of the world show that their inmates belong there. In such an environment, the notion of reforming criminals is a joke. It’s the prison system that needs reforming.


A Brief History of Morality

[Image: men fighting]

Morality arose out of the rule of Tit for Tat: do something bad to me and I’ll do something bad to you. The possibility of retaliation deters a wrongdoer.
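Tit for Tat is also a well-studied strategy in the iterated prisoner’s dilemma, where it famously won Axelrod’s computer tournaments. A minimal sketch (the move names and the sample opponent are my own illustrations, not anything from the tournaments themselves):

```python
def tit_for_tat(their_history):
    """Cooperate on the first move, then mirror the opponent's last move."""
    return "cooperate" if not their_history else their_history[-1]

# Against an opponent who defects once and then returns to cooperating,
# Tit for Tat retaliates exactly once and then forgives.
opponent_moves = ["cooperate", "defect", "cooperate", "cooperate"]
my_moves = []
for round_no in range(len(opponent_moves)):
    my_moves.append(tit_for_tat(opponent_moves[:round_no]))
print(my_moves)  # ['cooperate', 'cooperate', 'defect', 'cooperate']
```

The whole strategy fits in one line: retaliate once, then forgive, which is exactly the deterrence described above.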

From the very first humans, we knew that it was easier to take the fruits of someone else’s labour than to go out and earn them for ourselves. The strongest would always get what they wanted, right or wrong. That’s how the animal kingdom works.

We can only speculate how this principle became one of respect for another’s property.

Humans are one of the very few animals that mate for extended periods. Birds do this, but usually for one mating season only. Hardly any mammals pair off.

At the same time, humans are sexually promiscuous. Marry this with the mating bond and we get a very complicated state of affairs. (Puns intended.)

Alongside the possession of private property, the exclusive sexual bond is a huge source of conflict between humans.

To stop clan members from killing and maiming each other over property and the right of exclusive sexual access to a woman, the clan developed a set of rules. Break the rules and it wouldn’t be just the injured party coming after you. The whole clan would punish you.

It wasn’t only punishment. If you cooperated with the clan, they would cooperate with you. Again, this is only speculation, but public morality probably arose this way.

Clans that practiced these rules became stronger and able to overcome other clans that were disunited by internal strife over property and mates.

What we call morality is simply a system of sanction and reward for the benefit of society.

Moral systems differ strongly between different societies and at different times, depending on the values they place on property and ownership of women. In modern Western society, the ownership of women by men is slowly beginning to fall away, but this is not happening globally.

How does Morality apply to Autons?


The Ethics of Automation

[Image: trade bot]

As AI develops, it will have to be imbued with the set of values that we call morals. Just as humans are fast becoming aware that we have a duty to all the other creatures that share our planet, so the Autons will have to be persuaded to adopt the same morality (or better) and allow humans to exist alongside them.

Now remember that automation and job losses have been happening apace while industry has been under human control. In the UK and US, around a million cashier jobs (as of July 2015) have been lost to automated checkout machines. Humans are supposed to be ethical. It turns out that we are not. We will happily throw millions of workers onto the scrap heap, piously telling ourselves that they’ll find equally rewarding jobs elsewhere. Most of the time they do not.

And yet we have the gall to demand that AI adhere to a far higher morality than we ever did.

But let’s look at the global landscape fifty years from now, when Autons are in charge. No president of any country, no matter how powerful, will be able to command that the machines be switched off. That means that humans will have no jobs and will starve.

Or will we?

Can’t we run our own parallel economy?

No, we cannot. Economic activity arises out of the need to satisfy the practically infinite range of human wants. We satisfy those wants by manufacturing some product or providing some service.

But what if the Autons can manufacture those products or provide those services far better, cheaper and quicker than humans can? Then humans will only be able to scrape a living by operating in those niches that Autons find insufficiently profitable to occupy.

It’s said that if Bill Gates sees a ten-dollar bill lying in the street, he doesn’t bother to bend and pick it up. In the few seconds it would take him, he could instead have made ten thousand dollars doing something more profitable. And in the twenty seconds or so it would take him to explain this to you, he could have made a hundred thousand. (Although on the other hand he does take time out to publish the Gates Notes blog.)

So let me retract part of my assertion. Autons may not bother creating products and services for humans any more. It won’t be profitable enough. They will trade with each other. Their economy will grow exponentially and rapidly. It could double in size every day, every hour.

Although computers keep on getting smaller and smaller, they will still occupy space. Say that the size of a single Auton could be reduced to 1000 atoms. Say that the number of Autons doubled every day. In six months, every atom on earth would have been used up to make Autons.
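The arithmetic behind that claim is easy to check. The figure I use for the number of atoms on Earth is an outside estimate, roughly 1.33 × 10⁵⁰, and is an assumption on my part:

```python
import math

ATOMS_ON_EARTH = 1.33e50   # rough published estimate; an assumption here
ATOMS_PER_AUTON = 1_000    # the 1000-atom Auton from the text

autons_needed = ATOMS_ON_EARTH / ATOMS_PER_AUTON   # ~1.33e47 Autons

# Starting from a single Auton and doubling daily: 2**days >= autons_needed
days = math.ceil(math.log2(autons_needed))
print(days)  # 157 days, i.e. a little over five months
```

So even starting from one Auton, daily doubling exhausts the planet’s atoms comfortably within the six months claimed above. That is the unforgiving nature of exponential growth.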

Assuming that the atoms in humans escaped being turned into Autons, we wouldn’t have anywhere to live, would we?

How about from the Auton point of view? They would foresee running out of terrestrial resources and reach out to grab atoms from the rest of the solar system. That would only delay hitting the limits for a few more months. What then?

Autons would not need the Sun for energy. Conceivably they could use every nuclear particle in the solar system, including those in the Sun. And then they’d end up with something like the… nahh I’ve been watching too much Star Wars.

Besides, if this is what Autons do, then we would expect to see many other Death Stars in our galaxy. How would we detect them, if they’re not emitting any light? Simply because when they passed in front of another star, or “transited,” the light from that other star would dim. Perhaps only slightly and briefly, but astronomy has reached a level where that blinking would be detected.

I am also assuming that economics is an exact science, one that we humans have been unable to solve for the simple reason that we are irrational and our choices cannot be predicted with any degree of certainty.


The Cost of Automation

[Image: factory robot]

Repeatedly we have been told that automation does not cost jobs. Let’s look at situations where this is true.

A factory that is going out of business may automate and become more productive, allowing it to survive. The jobs that would have been lost are now intact. As business improves, the factory could grow far larger and take on many times more hands. Entire dying industries could be turned around.

Or, some innovator could develop a new product or service, but to make it largely by hand would price it out of the consumer’s reach. Automation makes the product viable and creates new jobs that never existed before.

However, this optimistic outlook ignores the far more common situation where all the businesses in an entire industry automate, simply to remain competitive with each other. The industry is producing the same volume of output, but with far less labour. Jobs have been lost.

Economists love to talk about how automation means cheaper products, and that means an increase in real wages. You can buy more with the same money.

Provided you are employed and have money.

For the growing numbers of displaced workers, automation means economic misery. Very few countries provide living benefits to the unemployed.

In perhaps fifty years (and I’m being cautious here), humans will be completely superfluous in industry. At that point, you have to ask yourself, what will humans do to earn enough to stay alive?

In perhaps fifty years (and I’m being cautious here), AI will be far smarter than humans. Humans may not even be the bosses and owners of the Autons anymore. The Autons will be talking to each other, learning from each other (not from humans), forming alliances and of course competing with each other. It’s far from impossible that a highly ambitious Auton will design a machine that destroys its competitor—a machine like a nuclear bomb delivered by an ICBM, for example.

This is where ethics comes into the picture.


A Look at the Future of AI

[Image: self-checkout]

Already, autonomous implementations of AI are common. If you are on Facebook, for example, software decides which posts are going to appear on your feed. Automated trading systems execute buy and sell orders for hundreds of millions of dollars only microseconds after a market change has signalled that it is profitable to do so. When you buy a can of beans at the supermarket, scanning it at the automated checkout may trigger a buy order to a supplier. In turn that may generate instructions to run a production batch, move the stock from the warehouse to despatch and schedule a delivery to the supermarket, all without human intervention.

But that’s only the beginning. The examples I gave show AI executing instructions using systems that humans designed, with products that humans invented and designed, and doing work that humans would otherwise have done.

The next step is AI systems that develop themselves, faster and better than human programmers could.

Remember that AI is becoming exponentially smarter and smarter. Cybernetics analysts claim that AI power doubles every 18 months. Its repertoire of abilities increases and so does its efficiency. In twenty or thirty years it will be ten thousand times more powerful than it is now, smarter than any human, and may even acquire the phenomenon we call consciousness.
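Taking the claimed 18-month doubling at face value, the arithmetic is simple to check:

```python
def power_multiplier(years, doubling_months=18):
    """Growth factor if capability doubles every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

print(round(power_multiplier(20)))  # roughly 10,000x after twenty years
print(round(power_multiplier(30)))  # roughly 1,000,000x after thirty years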

In the not too distant future, AI won’t need humans, except of course as the end consumers of some of the products and services that it creates. Autons will start inventing their own new products; moments later the product will be fully designed and ready for consumer testing. And if the consumer is another Auton, acceptance and the confirmation of a multi-million dollar deal could be only another second down the line.

Autons will design the machines that make the products, build the production plants, and manage them. Autons won’t even have human owners.

The big obstacle in the way of this happening is ethics.


The Trolley Problem Part III

[Image: KITT]

A Real-life Example

I’m pretty sure that ethics have not been programmed into driverless cars at all so far. The driverless car is programmed to avoid collisions. If it does its best and human lives are lost, too bad.

Already, driverless cars may very well be programmed to select the play that yields the softest collision. But let’s look at a realistic scenario where the choice of plays has a strong ethical character. A hard collision with another car is unavoidable and these are the plays available to the Auton:

  • Protect the occupants of this car even if it means the death of occupants of the other car.
  • Accept the probability of serious injury to the occupants of this car if it avoids death to occupants of the other car.

And both of these plays are complicated if any of this scenery is present on the playing field:

  • The other car is driven by a human who caused the collision.
  • A mechanical failure of either car caused the collision. Presumably future driverless cars will alert each other when they are out of control.
  • One car holds more occupants than the other. In a further refinement, driverless cars might announce how many occupants they carry.
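As a thought experiment only, here is a minimal sketch of how an Auton might score the available plays. The harm weights and the sample numbers are purely my own illustrative assumptions, not anything actually deployed in a driverless car:

```python
def choose_play(plays):
    """Pick the play with the lowest expected harm.

    Each play is (name, expected_deaths, expected_serious_injuries).
    The relative weighting of a death against an injury is an
    illustrative assumption, not a settled ethical answer.
    """
    DEATH_WEIGHT = 10.0
    INJURY_WEIGHT = 1.0

    def harm(play):
        _, deaths, injuries = play
        return deaths * DEATH_WEIGHT + injuries * INJURY_WEIGHT

    return min(plays, key=harm)

plays = [
    ("protect own occupants", 2.0, 0.0),  # likely deaths in the other car
    ("swerve", 0.0, 3.0),                 # serious injuries in this car
]
print(choose_play(plays)[0])  # prints "swerve"
```

Notice that the entire ethical question has been smuggled into two constants. Change the weights and the Auton changes its choice, which is precisely why these decisions cannot be left to an arbitrary programmer.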

Now you see why the ethics of Artificial Intelligence are so important!

In the case of driverless cars, Autons will certainly be held to far higher ethical standards than human drivers are, such are our prejudices.


The Trolley Problem Part II

Creating an algorithm for the Trolley Problem

[Image: playing field]

On the Playing Field we have

Players

  • You, the decision-maker
  • The five people on Track 1
  • The one person on Track 2
  • A version of the Trolley Problem takes away the switch and the one person on Track 2, and adds a Fat Person whom you can sacrifice to stop the trolley

We could add

  • The person who caused the trolley to run away
  • The operator of the trolley
  • The owner of the railroad track
  • The trolley itself, if a play results in its destruction or damage
  • Likewise the contents of the trolley car
  • Anyone who has an interest in the contents of the trolley car
  • Parties such as family or insurance companies that have an interest in the other players

Scenery

  • The switchgear
  • The tracks
  • The fact that you, the decision-maker, are remote from the playing field and your life is never in danger
  • Your degree of knowledge about the other players

Plays

  • Turn around and walk away
  • Send the trolley to the track with the five people
  • Send the trolley to the track with the one person
  • Sacrifice the fat person

[Image: railroad track]

When a human is faced with a dilemma, it is easiest to do nothing. That’s why the Trolley Problem specifies that at the start of the game, the switchgear is set to the track with the five people. If it were set to the track with the one person, it would be easy to do nothing and pretend that you had not made a decision. The consequences to your mental state would not be as great.

Some philosophers have tried to split hairs by saying that it makes a difference if you operate the switchgear. If you change the switchgear from Track 1, the default setting, to Track 2, you are deliberately causing the one person on that track to die. But whatever you do, whether by commission or by omission, you are deciding who should live and who should die.

Valuation

People from different societies and in different times will value the players very differently. Your own valuations may change substantially over the course of your life and even from day to day.

All human lives are equally valuable—in theory. In fact your values are biased. Think about your valuations if:

[Image: woman on track]

  • You are a typical hetero male; the one person on Track 2 is a beautiful, sexy young woman and the five people on Track 1 are fat slobs;
  • One of the people is someone very dear to you;
  • You have devoted your life to the preservation of orang-utans and the five are five orang-utans that you have rescued;
  • In your society, men are valued higher or lower than women and children.

[Image: hillbilly]

Now change the playing field so that you are one of the people on the tracks, but still able to control the switchgear. If you were the one person on your own, you would probably sacrifice yourself. But what if you were one of the five?

An AI algorithm for the Trolley Problem will avoid these subjective factors as far as it can. But it may still have to place relative values on human and non-human life.

One of the rules of a philosophical discussion is that as soon as you approach a common-sense practical solution, someone will change the playing field, like a child inventing reasons why it should not have to stop playing and go to bed.

In handling the Trolley Problem, an ethical AI would simply count the number of lives and choose the play that does the least harm. AI would let the one person die.

But then somebody will inevitably ask, “What if the one person is a saint and the five people are sinners?”

An AI decision-maker, which I am going to call an Auton (though some would opt for Borgia), will count the number of lives associated with each play and choose to let the one person die. The values that humans attach to individuals would be Privileged Information to an Auton and would not be taken into account at all. Using my terminology, they would not be on the playing field.
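The counting rule described above is almost trivially simple. A minimal sketch, assuming each play maps to a known number of lives lost:

```python
def auton_decision(plays):
    """Choose the play that costs the fewest lives.

    `plays` maps a play name to the number of lives lost if it is chosen.
    Subjective valuations of the individuals are deliberately excluded:
    to the Auton they are Privileged Information, off the playing field.
    """
    return min(plays, key=plays.get)

plays = {
    "send trolley to Track 1": 5,
    "send trolley to Track 2": 1,
}
print(auton_decision(plays))  # prints "send trolley to Track 2"
```

Saints and sinners never enter the function at all; the only input is the body count, which is exactly the point.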
