Attempt #6: Bad Beef, database trouble and eco-friendly corpses

This week:

The UN gets clever people to look at important issues sometimes, and what they said recently is “Gee, all this land being used to rear cattle is being used quite poorly for the planet – if we start deforesting the rainforest because everyone wants to eat cows and dairy, that’s going to suck, so maybe don’t eat quite so much of that stuff”. They also noted that people don’t like being told to eat less beef. Then, just a few days later, farmers in Brazil started burning down the Amazon rainforest and people got upset about it. (IPCC, Nature) [1]

Looking at far away stuff, like galaxies, requires bending light from distant objects in such a way that it makes them look big enough for us to see. Doing this with glass or polished mirrors can cost billions of pounds and a lot of work. The Earth’s entire atmosphere bends light from celestial objects too. So by setting an “eyepiece” out at roughly the distance of the moon, David Kipping from Columbia is suggesting just using the whole planet as a telescope instead. (MIT TR)

A nuclear explosion happened in northwestern Russia during a weapons test and some people got very badly poisoned by radiation. Radiation is just atoms breaking apart and spewing their bits everywhere, and because those atoms can pretty easily get from one place to another, radiation poisoning is somewhat contagious. At the hospital to which the patients were brought, no one told doctors or staff that the patients had been in a nuclear blast, endangering doctors and their other patients. It’s also hard to tell how serious this is, because Russia stopped two radiation monitors from giving accurate data about the magnitude of the blast. (BBC)

The last time people were testing lots of nuclear weapons, scientists and philosophers like Einstein and Russell got together to kick up a fuss and say “Can you please not”. In fact, it helped contribute to some non-proliferation agreements and won the movement a Nobel Peace Prize. The world has lots of other problems today, and since trust in scientists is slowly rising, some people want them to start kicking up more fuss in an organised way again. (Nature) [1]

To work out where someone was, you can check which mobile phone tower their phone was connected to at the time. If you record that information, you can share it with the police to help work out if someone was implicated in a crime. That’s only useful if you actually correctly match the phone tower data to the phones when building your police database, and when you don’t, you might end up providing incorrect evidence in 10,000 criminal justice cases, as happened in Denmark. (NYTimes) [2]

Burning dead bodies isn’t great for the environment, and sticking them in boxes under the ground isn’t either. But decomposing them into compost for plants would be nice, if we could get microbes to do so. One of the problems is getting enough oxygen to all the microbes around the body so they can eat it properly. The solution? Rotate the body in a vessel like a doner kebab to ensure airflow and spread microbes and heat around properly. (CBC)

[1] Public engagement: 

For reasons only somewhat related to my own ego, I read a lot of stuff about how scientists need to engage with the public, share recommendations with policymakers, be advocates etc. And I was amused by this particular quote from one of the co-chairs of the IPCC working group, Hans-Otto Pörtner – “We don’t want to tell people what to eat… but it would indeed be beneficial, for both climate and human health.”

I’m willing to wager that telling people what to eat is one of the few things he would, in fact, rather like to tell people. Perhaps even the thing he’d like to tell people the most.

There was a similar tone adopted in the EAT-Lancet commission’s report earlier this year which, like this IPCC report, got lots of clever people to sit down together, review all the evidence and unsurprisingly (but quantifiably and demonstrably, which is the important thing here) found that eating a lot less meat and a lot more plants would be good for the planet and, if done properly, likely better for most people’s health. But the entire wording of the commission goes only as far as saying “You see, it would be really good if people in rich countries stopped eating as many cows. Like, about as good as all the CO2 emissions prevented by using nuclear power,” as opposed to “STOP EATING BEEF NOW OR WE ALL DIE.”

I’m amused by the more conciliatory tone of voice because it very neatly plays the “don’t be a climate alarmist” card that a lot of climate communication strategists are now pushing for. And probably rightly so! Few people like a downer, or a doomsday-er or anyone who tells them after a long and hard day they have more work to do. So, spin it positively, I guess.

But there’s a broader sentiment here beyond talking about the climate, to anything that science bears on that has an effect on people’s lives. Every year the NHS publishes guidelines on how to not ruin your own body via drinking and then people, annually and habitually, get very angry about it, with the UK’s own health secretary calling them “diktats”. As though telling you how not to die early is a nuisance.

Which is why the Nature editorial calling on scientists to band together like the Pugwash days of anti-nuclear proliferation, to me, seems frankly dated and off the mark. If you can’t get the man down the pub (and I don’t say this disparagingly but as an example of the kind of actual, existent person that research often fails to reach) to take a positive interest in preserving their own health, then constructing a globalist cabal of scientists who organise to routinely chastise society about the climate… well it’s not the best PR move.

Which is not to say anyone doing research in a field with a direct impact on public well-being should shut up, or not produce reports like the IPCC/Lancet ones and tell people about them. Nor would I disparage someone like Greta Thunberg for telling it like it is.

Simply that, as I mentioned in Attempts #3 and #4, the way that people are influenced and come to learn things is radically different from the ‘50s and like anyone wanting to effectively communicate anything, you have to tailor a message to its audience. Including high ranking politicians.

Fortunately, since then, there has at least been a bit of research on what gets people to do things that are good for themselves and for society. And while I agree that there’s an unspoken paternalism (and perhaps insidiousness?) behind organisations like the Behavioural Insights Team (which has become semi-independent from the UK government since its inception), I think one of its advisors, who helped set it up, has a good point.

“If you want people to do something, make it easy.”

Interestingly, there was another Nature editorial piece this week that seemed to push for exactly that.

[2] Dodgy databases

There is, rightly, a lot of concern about how machine learning algorithms are being applied in the public sphere, and in particular the criminal justice system.

And I agree, we should be scrutinising algorithms that claim to predict reoffending, identify individuals with facial recognition in public spaces, and try to assess whether someone is lying from their facial expressions (currently being trialled in European airports). All of that stuff has a lot of ethical controversy, demonstrated dubiousness and lack of transparency behind it (as mentioned in previous Attempts).

But then a problem like this comes up and reminds us we are far from living in the future. Or if we are, it’s of the William Gibson “unevenly distributed” sort. Recording data properly and transferring it without something going wrong is the perennial task of most computing. Hell, copying data from one place to another is literally what the internet was invented for!

So when this happens – 

“The authorities said that the problems stemmed partly from police I.T. systems and partly from the phone companies’ systems, although a telecom industry representative said he could not understand how phone companies could have caused the errors.”

– it gives one pause for thought. Like, sure, worry about all these newfangled technology issues coming up everywhere related to machine learning. But worry more fundamentally about the usual problems with technology that haven’t gone away, even if they’ve changed form and scale, over the last 50 years.

We know that improperly represented data is often, in fact, one of the biggest problems with training a deep learning algorithm, so perhaps I’m artificially separating concerns about the two problems when they are in fact the same thing. But it’s still worth bearing in mind that the old adage from the ‘60s still applies to many (if not most) modern problems with technology: garbage in, garbage out.
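To make the “usual problems with technology” concrete, here’s one entirely hypothetical way a data-transfer step can silently corrupt records – a timezone relabelling bug of my own invention, purely for illustration (the NYTimes article doesn’t say this is what happened in Denmark):

```python
from datetime import datetime, timedelta, timezone

LOCAL = timezone(timedelta(hours=2))  # hypothetical local offset (e.g. Denmark in summer)

# One invented carrier record: a phone connected to a tower at 23:30 UTC,
# which is 01:30 the *next day* in local time.
connected_utc = datetime(2019, 6, 1, 23, 30, tzinfo=timezone.utc)

# Correct import: convert the timestamp into local time.
local_ok = connected_utc.astimezone(LOCAL)

# Buggy import: relabel the UTC timestamp as local without converting it.
# Nothing crashes – the record just silently ends up two hours off,
# and in this case on the wrong calendar day entirely.
local_bad = connected_utc.replace(tzinfo=LOCAL)

print(local_ok.isoformat(), "vs", local_bad.isoformat())
```

Garbage in, garbage out: a database full of records like `local_bad` would happily tell a court the phone was near a tower at a time it wasn’t.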

As a fun historical note, no less than Charles Babbage (designer of the Analytical Engine, the first design for a general-purpose mechanical computer) was saying as much in the 1800s:

“On two occasions I have been asked, “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.” 

Or, to translate into modern vernacular: “If you put a bunch of junk into my machine, what on Earth makes you think it’ll give you anything useful, you dimwit?”


Attempt #5: Pricing poverty, lonely extremists and AI tongues

This week:

Are people getting poorer in Rwanda? Since Paul Kagame became president of Rwanda, poverty has fallen radically. As another election draws near, his government and the World Bank claim that trend has continued in recent years. Except maybe it hasn’t. Depending on which consumer goods you price, poverty may actually have risen – as several academics, and some within the World Bank itself, think. Which would put a dampener on the World Bank’s story of success, fuelled by the $100Ms spent in aid every year, and on Kagame’s bid for re-election. (FT) [1]

There are certain thought patterns that might lead you to become a religious extremist. Some researchers think it’s “the will to fight and die defending sacred values”. More specifically, it’s thought there’s a specific part of the brain that becomes more active when you have those feelings. Now there’s a study claiming the same region becomes active, and maybe leads to an increase in extremist thought, when people feel socially excluded. (NSN)

Maybe you’re a woman diagnosed with rectovaginal endometriosis, a condition where you have excess tissue growing in sensitive parts of your pelvis. Your doctor asks if you’d like to be enrolled in a study on that very condition. You say yes. It turns out they ask you a bunch of questions about your sex life, get their friends to decide if you’re attractive, and then publish a paper about it. Well, it happened and naturally, people think this is ridiculous. (Twitter)

Lots of companies are using some form of bot/agent that responds to oral requests from their users (think Alexa/Siri). Getting those services to recognise what a user is saying involves training an algorithm to convert speech to text. The best way to gather data? Record your users and, often without explicit permission, have a bunch of contracted people listen to their conversations and transcribe them. This week Facebook joined the likes of Amazon, Google, Microsoft and Apple doing exactly that. (MIT TR)

Whisky tasting is more of an art than a science. So much so that experienced tasters might be unable to distinguish “true” single malts from knock-offs that retail around the world. So how might a distillery protect its brand and sales? Maybe with an electronic tongue that uses the chemical interactions of whisky particles with metallic nano-structures, plus machine learning, to find distinct chemical signatures that distinguish them from one another. (Guardian, Nanoscale) [2]

[1] Fun with numbers:

In Attempt #4 (which a reader informs me may have made it into junk folders rather than inboxes), I mentioned that those without technical or scientific literacy are likely quick to attack individuals/organisations on personal grounds with unsubstantiated moral claims rather than technical arguments (specifically about how the inputs to an ensemble model are weighted) when they disagree.

In Attempts #2 and #3 I also mentioned that I liked watching/reading about economists argue because they tend to do so in a “reasonable” way – the kind where the debate seems to carry some sort of weight and the arguments feel more than petty point scoring and like some sort of actual spirited academic debate.

I still think that’s true, I find economic debates satisfying and kind of interesting (again, partially and likely due to my own relative economic illiteracy) but in this case some of that fun is hampered when the very obvious personal motivations start seeping in and presenting the usual boring means for introducing bias.

Like, yes, you can absolutely imagine Kagame pressuring government statisticians to choose a representative basket of goods that shows poverty declining rather than increasing for the purposes of re-election. And that’s not some “third world” corruption problem – in Attempt #1 we saw the Liberal Democrats pulling even less sophisticated statistical tomfoolery for propaganda.

I guess the interesting thing is the other end of this argument and who is representing it. I’m not terribly familiar with the opposition parties of Rwandan politics but it doesn’t appear immediately obvious to me that a select group of individuals in the World Bank, several academics scattered in various institutions or the FT have a strong political agenda to work together to contest these figures on political grounds.

Or to put it more clearly, I’m more inclined to believe that in fact this basket of goods used to indicate prices (and relative poverty) in Rwanda isn’t representative of poverty rates in the way the Rwandan government says it is. Because other than a commitment to formulating an “honest” picture of Rwandan development, what would everyone else’s angle here be? Has the FT somehow been infiltrated by Rwandan opposition party propagandists?

The obvious counterargument to this, and a recurring theme of this newsletter, is that academics and anyone who can effectively put on airs of being “smart” will often kick up a fuss just to look clever. In fact, as we saw in Attempt #2, they will straight up commit fraud to do so. So there is that. But any economists hinging their future careers on this particular issue… I mean, maybe it’s impressive to rebuff the World Bank, but I don’t think it’s such a hot-button topic in the academic economics world as to win mega career brownie points.

Though if I am wrong, I’d love for someone to write in and tell me so.

[2] Whisky tasting:

If “bimetallic nanoplasmonic” tongues do in fact become a thing, and find use beyond whisky tasting in the realms of wine and other spirits, I might be disappointed that a favourite pastime of mine – arrogantly and perhaps incorrectly dismissing most wine/premium spirit tasting and discernment as a farce – might become irrelevant.

I mean, it seems like the logical step from getting a machine (and machine learning) to detect different kinds of whiskies is also to start telling people which ones they’re likely to enjoy and which ones are “good”. Good might mean distinct from “cheap”, but it could also mean other things. What if we come to replace sentiments like “oak-y, honeysuckle notes” in single malt with the random PCA components (representations in the data of independent and unique chemical components) that are discovered by an artificial tongue and an algorithm?

One thought is that it might start creating “independent” criteria for training whisky tasters. For example, can you repeatedly and accurately detect hints of PCA component 6 in Islay whiskies as compared with Irish whiskies? Another thought, and the more interesting one for me, is taking this whole scheme well beyond whisky and creating something akin to the digital Red-Green-Blue deconstruction of colours on computer monitors, but for flavours in food and drink.

This is already basically a thing in that we appreciate “sour” and “savoury” as independent flavours and know how to create those flavours with very specific chemicals (like MSG or capsaicin for spice), but some clever scheme of totally orthogonal representations of flavour in chemical space might give us new ways to combine flavours for tastes we’d never even think of.
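For the curious, a “PCA component” is just a direction in the sensor-reading space along which samples differ most, and the components are mutually orthogonal – hence the RGB analogy. A minimal sketch, assuming NumPy and using entirely invented data (two hidden “flavour axes” mixed into eight hypothetical sensor channels):

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented data: 30 drams measured on 8 hypothetical nanostructure channels.
# Two latent "flavour axes" drive the readings, plus a little noise.
latent = rng.normal(size=(30, 2))
mixing = rng.normal(size=(2, 8))
readings = latent @ mixing + 0.1 * rng.normal(size=(30, 8))

# PCA via SVD of the centred data: the rows of Vt are the orthogonal components.
centred = readings - readings.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)

# Each dram's coordinates on the orthogonal "flavour" axes.
scores = centred @ Vt.T

explained = S**2 / np.sum(S**2)
print("fraction of variance per component:", np.round(explained[:4], 2))
```

Because the fake data only has two underlying axes, the first two components soak up nearly all the variance – which is exactly the sense in which PCA finds “independent and unique” components in the data.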

On the other hand, early attempts at machine-learning derived recipes include “mushroom, strawberry, chicken and pineapple”, so I’m happy to be a late adopter on this one.

 

Attempt #4: Moss piglets, female wit and alternate worlds

This week:

Back in April, an Israeli spacecraft accidentally crashed on the moon. It was carrying sub-millimetre sized animals called Tardigrades that can survive in extreme conditions (low temperatures, lack of water and even vacuums). They’ve likely survived in their “frozen” state and just need a little water to get moving again, and will remain in that state for the next 30 years. Even though hydrating them is unlikely, the space lawyers aren’t happy about it. (BBC)

Machine learning, or “AI” algorithms require lots of labelled data examples to learn how to do what they’re meant to. Labelling the data in the first place requires people, and people are cheap to hire in low-income countries. This sounds, and is, exploitative so some companies are saying “Hey, we’ll pay these guys a little more so you can sleep easier at night”. But none of that addresses the fact that it’s insecure employment, not everyone can do it and it’s still, ultimately, exploitative. (MIT TR)

Humour is good in business settings: it makes you look confident, considerate and engaging. If you’re a man. All other things kept constant, people (both men and women) tend to think it makes women look unprofessional and un-serious even if the jokes and their backgrounds are otherwise identical. Researchers think it’s to do with our preconceptions about how women should behave. (HBR)

[Content warning: Violence] And if your preconceptions about women are really strong and as a man you like going online to talk about how they aren’t sleeping with you and are destroying Western civilisation with their pursuit for autonomy, you might just be en-route to becoming a white supremacist and a mass shooter – anti-feminist sentiment is often a stepping stone to far right radicalisation. And it all comes down to wanting to control wombs. (The Atlantic)

It’s been very hot in the Northern Hemisphere this summer. The obvious conclusion is “Damn, this climate change stuff is a drag” and sure, that’s the likely cause of “unseasonably” high temperatures. But how likely is likely? Scientists are improving methods for attributing the likelihood of heatwaves, droughts and other extreme weather events to climate change. If they can do it in real time, they can show people how climate change is affecting them in the moment. (Nature) [1]

[1] The blame game:

Last week I said that economists, like astronomers, face a major hurdle when it comes to interpreting cause and effect in their respective fields. That’s because whether it’s galaxies, economies or occasionally people, the variable you want to control in order to see its effect on your outcome of interest isn’t always something you can get a handle on (e.g. you can’t set up multiple, actual galaxies with different initial conditions or economies with different consumer demands etc).

One solution to this is running a “natural experiment” (finding places that are nearly the same but differ in your control variable) and seeing how the outcomes differ. The other solution, which I alluded to, was computer simulations.

Like economists and astronomers, climatologists can’t really run experiments with identical multiple civilisations on Earth that have different rates of CO2 emissions to measure whether this summer would have been as hot as it was without anthropogenic climate change. And it’s not an entirely unambiguous process to untangle – the weather is the archetypal example of a complex system. It’s stochastic, non-linear, and things like heat waves and storms can and will arise from all kinds of initial conditions, so it’s almost pathologically difficult to attribute straightforward cause and effect. But, to a degree, you can model bits of it on a computer.

So “attribution science” in climate science tends to focus on likelihood. If I had 2,000 identical Earths, how often might a summer like this be hotter on the 1,000 Earths where we messed up the planet like we did compared with the 1,000 Earths where we didn’t? And that sort of gives you a number you can do things with. For example, you can estimate that the Day Zero drought in South Africa was about three times more likely to have happened with anthropogenic climate change than without.
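The “many Earths” arithmetic can be sketched in a few lines. This is a toy simulation with made-up temperature distributions – the 1°C shift and the 1.5°C threshold are my assumptions for illustration, not real climate model output:

```python
import random

random.seed(0)

N = 1000  # simulated summers per scenario (toy numbers, not climate model output)

# Hypothetical summer temperature anomalies in °C: the "messed up" Earths run
# about 1°C hotter on average. Both distributions are invented for illustration.
warmed = [random.gauss(1.0, 0.8) for _ in range(N)]
counterfactual = [random.gauss(0.0, 0.8) for _ in range(N)]

threshold = 1.5  # "a summer like this one": anomaly above 1.5°C

p_with = sum(t > threshold for t in warmed) / N
p_without = sum(t > threshold for t in counterfactual) / N

# The risk ratio is the number attribution studies report:
# "X times more likely with anthropogenic warming than without".
risk_ratio = p_with / p_without
print(f"P(event | warming) = {p_with:.3f}")
print(f"P(event | no warming) = {p_without:.3f}")
print(f"risk ratio = {risk_ratio:.1f}")
```

The Day Zero figure is exactly this kind of ratio, just computed from ensembles of real climate models rather than two made-up Gaussians.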

This seems to me to be a fairly sensible and reasonable thing to do and about as good as these things can get. Of course, if you’re only using one model, it will be biased (in the statistical sense) and wrong (in the George E. P. Box sense), but it looks like researchers are ensembling a bunch of different models to address that issue (the basic idea here is that if you have multiple uncorrelated models, you can average over their systematic biases to reduce the overall bias of your final prediction. Of course, when your models are correlated, you end up with models all falsely predicting a healthy mortgage market, which then crashes and causes a global financial crisis).
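The ensembling point is easy to demonstrate with a toy simulation of my own (not how climate ensembles are actually built): when each model’s error is independent, averaging cancels the errors out; when the models share an error, averaging does nothing about it.

```python
import random

random.seed(1)

TRUTH = 10.0  # the quantity every model is trying to predict

def ensemble_error(shared_fraction, n_models=10, n_trials=2000):
    """Mean absolute error of an ensemble average whose members' errors are
    partly shared (correlated) and partly independent."""
    total = 0.0
    for _ in range(n_trials):
        shared = random.gauss(0, 1)  # error common to every model this trial
        preds = [TRUTH
                 + shared_fraction * shared
                 + (1 - shared_fraction) * random.gauss(0, 1)
                 for _ in range(n_models)]
        total += abs(sum(preds) / n_models - TRUTH)
    return total / n_trials

independent = ensemble_error(0.0)  # uncorrelated errors average out
correlated = ensemble_error(0.9)   # a shared error survives the averaging
print(f"mean error, independent models: {independent:.2f}")
print(f"mean error, correlated models:  {correlated:.2f}")
```

The mortgage-market analogy is the `shared_fraction = 0.9` case: ten models agreeing doesn’t help if they all inherit the same mistake.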

Great as all that sounds I think that if, as the researchers are hoping for, we start using attribution science when extreme weather happens (“This hurricane was four times likelier to have happened with climate change than if it wasn’t happening!”) to stoke public engagement on climate issues, the face of climate skepticism is going to adapt to this too.

Because if there’s one thing skeptics (and in fairness, even many seasoned scientists) rarely seem to dig into, it’s the statistical nuances associated with predictions. Hell, we can’t even seem to report the uncertainty of a model, or properly communicate what a standard deviation is, in mainstream popular science – without which lots of published research is grossly mis-contextualised. And in the context of the climate, a lot hinges on that very uncertainty (such as the question of “can we fix the climate anyway?”).

Anyway, my prediction is that rather than contesting the decision of how certain climate models have been weighted in the ensemble or arguing about surface reflectance of thawing ice or anything that might require some scientific literacy, skeptics are going to do (or more narrowly target) what’s effective and politically expedient: attacking climate scientists themselves and accusing them of having an agenda.

This already kind of works and if you can’t be bothered to contest the science, it’s an easy (and probably fun) way of pushing back against perceived climate alarmism.

I also mentioned last week in the context of the Harvard balloon experiment that the research group was already planning on how to manage the publicity and political ramifications of their research, regardless of the outcome. The attribution scientists may also want, for very personal reasons, to start doing the same.

Attempt #3: Shiny dust, ’50s values and smart summaries

This week:

If you could dim the sun, maybe climate change wouldn’t happen as quickly and it wouldn’t be so bad. So, a new Harvard project wants to test how releasing plumes of light-reflecting dust from balloons in the upper atmosphere dims the amount of light/heat that reaches Earth. Problem is, “geo-engineering” approaches have the potential to make us complacent, or worse, mess up the weather and agricultural yields. And other researchers have previously been suspected of wanting to make a quick buck off these ideas. (Nature, Nature) [1]

As a teenager, Marcus Hutchins wrote malware that was used to steal bank account passwords. Later, he accidentally stopped an enormous malware attack on the UK and US that could have continued to cost billions. For the latter, he’s basically been pardoned from a 10-year prison sentence for the former (which he claims to regret). People seem broadly in agreement: good. (MIT TR)

Can an entire species have a “next of kin”? Some argue yes, and that it should very obviously be chimpanzees. The mounting evidence for their human-ness is so strong, they say, that we should be preserving not just their habitats but their “cultures” like we do for humans, and fighting for their emancipation. (Personal)

What if an entire nation adopted the implicit values and endorsed behaviours of US conservatives? Would they pull themselves up by the bootstraps and experience virtually no poverty? The evidence from Japan says… no. (Bloomberg) [2]

Profitably video-streaming yourself tends to be easier if you’re a conventionally* attractive woman. Except face-modifying filters, like the kind on Snapchat, are now so good practically anyone can emulate the look in real time. Well, until the artificial face falls off. (BBC)

When a viral outbreak like Zika happens, the scientific output from studying it comes out faster than field workers can realistically keep up with. To help, the process of generating “systematic reviews”, the main way of collecting and summarising relevant papers on a given topic, is being automated. But maybe it doesn’t have to stop there. (Nature) [3]

[1] Blue sky thinking: 

Well at the very least, the Stratospheric Controlled Perturbation Experiment, or SCoPEx definitely follows the rigid scientific protocol for tortured acronyms, which is a strong start.

The general idea behind this is you release some shiny dust in the atmosphere, it reflects a bunch of sunlight back into space, and that prevents some of the heat that would otherwise be trapped in the atmosphere. So far so good.

The coverage of this story is really odd though. I mean sure, it’s a project proposal, and there’s a lot of money being thrown at it ($3M), so you want to make sure it’s looking at something vaguely promising that has value to society. All textbook grant-proposal stuff. And yet some of the concerns don’t appear to be about whether this will work as well as intended, but about what happens if it does work.

The general fear about geo-engineering is that if you throw up a bunch of dust into the atmosphere, maybe it blocks out too much sunlight and weird stuff starts happening to crops, cloud formation, wind patterns or whatever. But to understand that in practice, it actually makes sense to fund a small-scale project like this that’s only throwing up hundreds of grams of calcium carbonate and measuring what happens, so you can recalibrate your inferences.

The bigger fear here seems to be what happens if this all looks super promising and works fine. Then the implicit concern is that suddenly every government or large company feels like they’ve been given a blank check to release as much CO2 as they like and point at the magic dust balloons, under the guise that they’ll mitigate all the terrible things they’re doing. You also hear the same concerns when people talk about CO2 fixing using plants – that it’ll detract from efforts to actually reduce CO2 emissions.

The consensus view amongst scientists seems to be “Well, you’d bloody well better do both,” but I can kind of see the concern here. In this case though it ends up crafting this very weird narrative where people almost seem afraid to find out the results of an experiment because of what it might reveal, even if it’s a partial solution for the very problem they’re concerned about! And that’s kind of weird in science.

SCoPEx has a committee to think about these sorts of things but it seems like an equally large part of this project will be managing the spin that comes out of any of its results, which seems reasonable. I guess it’s just amusing they’re making contingencies for what happens if their amazing plan maybe works a little too well.

[2] Natural experiments:

I talked a little last week about why I like the way economists argue about things. Topics often overlap with big, weighty ideas about society like political arguments do but the way they talk about it seems a lot more level-headed and evidence driven.

So then there’s the question of how you bring evidence to bear on certain political views. For example, as Noah Smith paraphrases from the National Review: “If people were just to work hard, avoid drugs, alcohol and violence, and stop having children out of wedlock, poverty would be rare.”

The usual scientific approach to testing if a variable affects an outcome is a randomised controlled trial. You pick two representative samples of people, make one of them follow the traditional conservative values and encourage the other to do the opposite, and measure their subsequent poverty rates.

Regrettably, research ethics committees are generally quick to put a dampener on experiments that mess with people’s lives. So what do you do when you want to know the outcome of something but you can’t control the variables?

Astronomers also face this issue in that they want to know things about planets and stars but can’t generally control their behaviour. If you want to know about the orbital radii of medium sized planets, you can’t really throw millions of tonnes of space rock out there and watch what happens. Sure, you can run computer simulations but it might not be the kind of gold standard evidence you can draw conclusions from.

But if you’re lucky, there might just happen to be a system out there with the variables at the conditions you’re interested in, as NASA’s exoplanet survey recently found. In that case, all you do is point a big telescope towards it and you’re in business: you can see what happens in practice.

Economists like doing this sort of thing too, and it’s called a “natural experiment”. Sure, you can’t control variables, but you can take a look at where those variables are different and look at the corresponding outcome and hope you can work something out. It’s not quite the same, but it’s often a good first indicator. Like seeing if being downwind of a motorway affects school pupils’ scores due to the air pollution.

Anyway, it turns out Japan is the natural laboratory of social conservatism, and despite embodying all those wonderful old-fashioned values, relative poverty (the proportion of the population living on less than half the median income) is still pretty bad. I could get political here, but instead, here’s a funny comic strip about astronomers and natural experiments.

[3] Automating Science:

Cards on the table, I like the idea of using big data, natural language processing and all of that stuff to help put together systematic reviews of fast-evolving topics. If you can automate the process of finding relevant papers with large sample sizes and use some clever combination of computer automation, crowd-sourcing and expert review to speed things up while the virus you’re trying to research threatens to kill thousands of people, that seems like a patently Good Thing.

But even if all of this machinery and methodology is initially being deployed for the Zika case, it serves as a natural experiment (see what I did there?) for how well this can work in other domains. If it turns out this machine-learning boosted, automatically updating systematic review gathers information more broadly, efficiently and with less bias than the old-fashioned way of some scientists reading a bunch of stuff, then I don’t see why similar techniques wouldn’t be used to create “evolving” systematic review papers in other fields.

On the face of it, this seems like just a time-saving tool for scientists. The algorithm finds and, in some sense, “reads” the papers, and I read the aggregate conclusion it produces. But that’s the kernel of forming research ideas! It’s often while reading papers, good, bad and ugly, that the mind mulls over certain topics and comes up with new strategies inspired by, or based on criticisms of, others. And really, putting together a systematic review and coming to conclusions based on the aggregated evidence of the papers contained therein is pretty damned close to doing science itself. You just put together an experiment that addresses a gap left empty or undecided by the literature.

In fact, this idea was already put to use earlier this month, when a machine-learning algorithm was unleashed on thousands of materials science papers to come up with new ideas about which chemical compounds to study. On the one hand, it hints at the original promise of artificial intelligence: synthesising vast amounts of information to generate novel insights faster than humans could (or perhaps insights humans couldn’t reach at all).
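The core trick in that line of work is representing words by the company they keep. A toy sketch of the idea (tiny invented corpus, not the actual study’s method, which used trained word embeddings): build a co-occurrence matrix over abstracts, then use cosine similarity to see which materials sit “near” a property word like “thermoelectric” in the literature.

```python
import numpy as np

# Tiny invented corpus standing in for thousands of abstracts.
corpus = [
    "Bi2Te3 shows excellent thermoelectric performance",
    "thermoelectric devices based on Bi2Te3 alloys",
    "NaCl is a common table salt crystal",
]

# Build a vocabulary and a co-occurrence matrix
# (appearing in the same sentence counts as co-occurring).
vocab = sorted({w for sent in corpus for w in sent.lower().split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    words = sent.lower().split()
    for a in words:
        for b in words:
            if a != b:
                counts[index[a], index[b]] += 1

def cosine(u, v):
    """Cosine similarity between two co-occurrence vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Which material's vector is closer to the property word's vector?
for material in ("bi2te3", "nacl"):
    sim = cosine(counts[index["thermoelectric"]], counts[index[material]])
    print(material, round(sim, 2))
```

On this toy corpus Bi2Te3 scores higher than NaCl, which is the whole game: candidates that the literature implicitly associates with a property bubble up, even if no single paper spells the connection out.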

But cutting humans out of the process, whilst scary for my own employment, also raises big questions about accountability and creativity. Would an AI trained on geo-engineering papers have come up with, or endorsed, Harvard’s balloon experiment? Hard to say. In a slightly different way, mathematicians seem to be very taken with the idea, though, and are already working to put themselves out of a job. But I guess being the person who maintains the machine that subsequently does the maths would be a small consolation prize, or maybe even a big one if you’re more concerned with finding mathematical beauty than with the process of discovery.

Attempt #2: Academic fraud, dodgy graphs and grumpy economists

This week:

Kazakhstan’s government is now intercepting all secure internet traffic within its borders, decrypting it so it can see what users are sending, and then encrypting it again before sending it on. The usual excuses about “protection of civilians” are being used. And as ever, it’s authoritarian nonsense. (ZDNet, MIT TR, BBC)

The European Court of Justice wants a way to detect unauthorised gene edited crops. But some changes to DNA are basically indistinguishable from, y’know, the random mutations that govern all life. Scientists aren’t really sure what to do about it. (Nature)

Some scientists fake data to push their studies through to publication. In fact, the World Health Organisation even makes recommendations to doctors based on these dodgy papers. English anaesthetist John Carlisle is checking the figures and hunting down the fakers. (Nature) [1]

Machine Learning algorithms can infer your emotions (whether you’re being aggressive or lying) just by looking at your face. That’s because your emotional state can be reliably read from just your facial features. Except that’s not true – the science says otherwise. But it won’t stop lots of “AI” companies from peddling those lies and claiming to be able to see what you’re thinking. (ACLU)

Wealth inequality is a bit bonkers at the moment. US 2020 presidential candidate Elizabeth Warren wants to fix that with a wealth tax. The former Treasury Secretary doesn’t think it will work. Now economists, who want broadly the same outcome, are arguing about the best way to go about it, and the central debate hinges on this – “Can you even tax the rich properly?” (CNBC) [2]

All political parties are bad (or wilfully misleading) at making graphs. So there’s this from the Lib Dem Press Office Twitter:

[Image: Lib Dem Press Office bar chart]

“For one there’s the truncated Y axis, with no label. Then there’s the fact 7 lines separate the Labour and the Tories’ likely share of the vote, according to a YouGov poll, but only 6 percentage points do. And, perhaps worst of all, somehow a quarter of a bar gap between the Conservative and the Lib Dem vote represents 2 percentage points. It is truly one for the ages.” (FT) [3]

[1] Fake Papers: 

There’s something about scientific fraud that really amazes me. I know, I know, every domain has its assholes, but there’s still something baffling about fraud happening in academic science. If you’re smart enough to make it as an academic (you presumably obtained a PhD, had the drive to get good references and the social presentability to operate in fancy institutions) and you care a lot about money, fame, prestige or preserving your ego above your principles (whatever they may be), my presumption has always been that you’d realise that being an academic is a terrible way to go about getting any of those things.

Want an easy metric for self-worth? Find a job with a high salary. Want to make money more easily in a somewhat less crowded job market? Go into investment banking. Want to be respected? Go into law. And, sure, getting a good job in finance, banking, consulting etc. is really bloody hard, but probably about as tricky as landing a good academic job. And those other fields come with way more pay, bigger networks and better job security.

This is not to say scientists are any more virtuous, selfless or humble (hah!) than those in finance, law or consulting. But if you don’t actually have some sort of personal love of truth and free-spirited inquiry, I really don’t see much point in losing out on the benefits offered by the many other forms of employment you could probably qualify for. Sure, there’s the sunk-cost fallacy: maybe you don’t want to go back and do a law degree after your PhD, maybe you just went along with a scientific career because it was the path of least resistance, and now you’re stuck in a field you hate and will happily commit fraud to get a leg up and some minor benefits to your career.

But the moment you’re willing to forgo your scientific integrity, why maintain the farce at all? Unless you’re a senior lecturer or professor at a good university (at which point it seems a little late to have lost the love of your subject), you’re probably having to work quite hard just to stay in the field, and you’d be better off finding something more rewarding elsewhere.

In short: if you aren’t wholly (and perhaps irrationally) attached to the silly, romantic idea of uncovering the truth, seeking new facts about the world and all that other jazz, I don’t see why you would want to stay in the field and gather acclaim in it in the first place. Why not simply jump ship and make much more efficient use of your loose morals? And for greater returns!

Well, there is this: 

‘Carlisle has developed his own theory of what drives some researchers to make up their data. “They think that random chance on this occasion got in the way of the truth, of how they know the Universe really works,” he says. “So they change the result to what they think it should have been.”’

So, it turns out that not only are scientists no more virtuous, humble or selfless than anyone else, they aren’t even all that good at being dispassionate arbiters of truth either. But I suppose that’s not really news.

[2] Feuding Economists:  

I think we find political debates tedious because so much of them comes down to really basic, repetitive logical fallacies. Endless strawmen, ad hominems, ambiguity, you name it. And with a bit of reading around, self-denial or just a propensity to follow your particular tribe (I politically align solely on the criterion of coolest logo, so Plaid Cymru), you can generally poke holes in your political opponents’ arguments and it all gets a bit samey.

But economists? They sound much more fun (hear me out). I don’t doubt it’s in large part because I don’t have a firm grasp of economics, but following a Twitter spat or a debate between economists just seems much more sporting, because both sides do a much better job of sounding reasonable and appealing to, y’know, actual theoretical models or data. But the debates are still about the basic way we organise society, in a way that affects people’s lives, so the stakes feel more important than academic debates in other fields about, say, whether supersymmetric particles exist (though that doesn’t necessarily make them more interesting).

And sure, it might then devolve into why a given model or interpretation of the data (or maybe even the data themselves) is wrong, but that’s where all the fun is. Saying “yo mama fat” elicits eye-rolls, but saying “yo mama’s waistline is a positive three-sigma deviation from the Gaussian mean for women her age”, and retorting that there are other variables of interest you should be marginalising over for a meaningful comparison, is another level of debate entirely.

I don’t know, maybe I’m just a sucker for anything that sounds vaguely smart and convincing, but the tension and the drama feel all the more enjoyable for it, even if I don’t really understand economics. It’s like Star Wars. The spaceships are made up, and even if it’s totally implausible that they would make sounds in space, what I care about is that the stakes and the action feel palpable and satisfyingly explosive.

Plus economists say stuff like this on Twitter, which is great: 

“Very cool to attack young academics for doing policy-relevant research. Here is my peace offering @LHSummers: let’s all root for the wealth tax. If it yields less than 80% of our estimated revenue, I give you 10% of my wealth. Otherwise, you give me 10% of yours.”

When’s the last time you heard a politician bet 10% of their personal wealth on the outcome of one of their policies? I don’t even care if this is just grandstanding; this is the kind of “skin-in-the-game” commitment I want to see in debate participants.

Plus, that “sounding reasonable” part isn’t entirely for show. Last week US Representative Alexandria Ocasio-Cortez asked Federal Reserve Chair Jerome Powell whether estimates of the “natural rate of unemployment” (the unemployment rate below which inflation supposedly starts rising ever faster) might be horribly wrong.

‘She noted, “inflation is no higher today than it was five years ago. Given these facts, do you think it’s possible that the Fed’s estimates of the lowest sustainable unemployment rate may have been too high?”

Powell’s response, to his credit, was as simple and direct as you’ll ever hear from a central banker: “Absolutely.” He elaborated: “I think we’ve learned that … this is something you can’t identify directly. I think we’ve learned that it’s lower than we thought, substantially lower than we thought in the past.”

Powell’s response was commendable, perhaps even groundbreaking; here was the Fed chair challenging decades of conventional economic wisdom. It was a welcome sign of a policymaker’s willingness to question age-old assumptions that have dictated policy and affected millions.’

See, that’s the kind of honesty I can get behind. 

[3] Terrible Political Graphs:

But alas, we can’t stay away from shoddy political discourse forever. Except this time, with graphs!

In response to the bizarre-looking graph from the Lib Dems posted above, the FT also had this to add:

‘So it was of little surprise to us when the Lib Dem Press Office tweeted perhaps one of the most heinous chart crimes of the year Wednesday. Mathematics clearly doesn’t course through their orange blood…

…In 2014, Nick Clegg pledged that he would make sure all UK schools follow the core curriculum which, of course, included mathematics. It’s a shame the same standards haven’t been extended to their hiring policy.’

But hang on just a moment. What if, in fact, the problem is that someone in the press office at Liberal Democrat HQ is actually too mathematically literate?

So what if the distance between the bars doesn’t seem to correspond to the actual numbers being reported? That’s just how we’re used to thinking about bar graphs! What if the bars aren’t being plotted in Euclidean space? What if they’re in some bizarre space-time with positive curvature, projected onto a 2-D plane? That would also explain why the y-axis isn’t labelled.

Maybe some intrepid mathematician for the Lib Dems is counting using some set that closely resembles, but isn’t actually, the real numbers? In fact, the percentages are suspiciously close to whole numbers, so we already know there’s a level of approximation going on here. Rounding procedures are, in some sense, arbitrary, and we don’t know that this pioneer of graph-making is constrained by the common-sense rules we learnt in primary school.

I would suggest that the Lib Dems could fix this problem by swapping all mathematicians currently on staff for physicists, but the problem there is you’d end up with a graph in which all the parties had an exactly equal share of the vote “to within an order of magnitude”, with enormous error bars. But to me, somehow, that would feel a bit more honest.

See you all next week.

The Ogler’s Guide to Shibuya, Tokyo

Since I was brought up in a city and have travelled to about two dozen of them, I can attest to at least one universal fact about them: cities have rhythms.

They might occur with perfect periodicity or with the irregular percussion of a jazz drum solo, but either way, they set the beat to which a city moves.

In London, for example, aside from the obvious day/night cycle and the awful bass-thumping from your inconsiderate neighbour’s speaker at three in the morning, the buses come every fifteen minutes, the guards at Buckingham Palace change on the hour and a tourist manages to block your path at least every twenty seconds. Every few months or so, the clouds part, just a fraction, and everyone goes temporarily mad worshipping the big glowing orb in the sky.

Shibuya Crossing


Ben Feringa’s missing Nobel prize

Ben Feringa is the sort of person you’d call ‘smart’.

Despite being relatively famous in his field, you probably haven’t heard of him. I certainly hadn’t until recently.

One indication of Feringa’s intellectual chops is that he’s a professor of chemistry at the University of Groningen. The research group he leads is [correction: unofficially] called the “Ben Feringa Research Group”, which should tell you something, but if that’s not convincing enough, you can peruse the very impressive-looking list of awards he’s accumulated over his career on his website¹.

I mean, the man was knighted by the Queen of the Netherlands², for goodness’ sake. Yet despite his carefully curated list of accolades (his website claimed to have been last updated in March 2018), there’s one quite obvious omission.

His 2016 Nobel Prize.