This Researcher Thinks AI Should Be Trained on Things Like ‘Dungeons And Dragons’

Everyone had died – not that you’d know it, from how they were laughing about their poor choices and bad rolls of the dice.

As a social anthropologist, I study how people understand artificial intelligence (AI) and our efforts towards attaining it; I’m also a lifelong fan of Dungeons and Dragons (D&D), the inventive fantasy roleplaying game.

During a recent quest, when I was playing an elf ranger, the trainee paladin (or holy knight) acted according to his noble character, and announced our presence at the mouth of a dragon’s lair.

The results were disastrous. But while success in D&D means ‘beating the bad guy’, the game is also a creative sandbox, where failure can count as collective triumph so long as you tell a great tale.

What does this have to do with AI? In computer science, games are frequently used as a benchmark for an algorithm’s ‘intelligence’. The late Robert Wilensky, a professor at the University of California, Berkeley and a leading figure in AI, offered one reason why this might be.

Computer scientists ‘looked around at who the smartest people were, and they were themselves, of course’, he told the authors of Compulsive Technology: Computers as Culture (1985).

‘They were all essentially mathematicians by training, and mathematicians do two things – they prove theorems and play chess. And they said, hey, if it proves a theorem or plays chess, it must be smart.’

No surprise that demonstrations of AI’s ‘smarts’ have focussed on the artificial player’s prowess.

Yet the games that get chosen – like Go, the main battlefield for Google DeepMind’s algorithms in recent years – tend to be tightly bounded, with set objectives and clear paths to victory or defeat.

These experiences have none of the open-ended collaboration of D&D. Which got me thinking: do we need a new test for intelligence, where the goal is not simply about success, but storytelling?

What would it mean for an AI to ‘pass’ as human in a game of D&D? Instead of the Turing test, perhaps we need an elf ranger test?

Of course, this is just a playful thought experiment, but it does highlight the flaws in certain models of intelligence. First, it reveals how intelligence has to work across a variety of environments.

D&D participants can inhabit many characters in many games, and the individual player can ‘switch’ between roles (the fighter, the thief, the healer).

Meanwhile, AI researchers know that it’s super difficult to get a well-trained algorithm to apply its insights in even slightly different domains – something that we humans manage surprisingly well.

Second, D&D reminds us that intelligence is embodied.

In computer games, the bodily aspect of the experience might range from pressing buttons on a controller in order to move an icon or avatar (a ping-pong paddle; a spaceship; an anthropomorphic, eternally hungry, yellow sphere), to more recent and immersive experiences involving virtual-reality goggles and haptic gloves.

Even without these add-ons, games can still produce biological responses associated with stress and fear (if you’ve ever played Alien: Isolation you’ll understand).

In the original D&D, the players encounter the game while sitting around a table together, feeling the story and its impact. Recent research in cognitive science suggests that bodily interactions are crucial to how we grasp more abstract mental concepts.

But we give minimal attention to the embodiment of artificial agents, and how that might affect the way they learn and process information.

Finally, intelligence is social. AI algorithms typically learn through multiple rounds of competition, in which successful strategies get reinforced with rewards.

True, it appears that humans also evolved to learn through repetition, reward and reinforcement. But there’s an important collaborative dimension to human intelligence.

In the 1930s, the psychologist Lev Vygotsky identified the interaction of an expert and a novice as an example of what came to be called ‘scaffolded’ learning, where the teacher demonstrates and then supports the learner in acquiring a new skill.

In unbounded games, this cooperation is channelled through narrative.

Games of It among small children can evolve from win/lose into attacks by terrible monsters, before shifting again to more complex narratives that explain why the monsters are attacking, who is the hero, and what they can do and why – narratives that aren’t always logical or even internally consistent.

An AI that could engage in social storytelling is doubtless on a surer, more multifunctional footing than one that plays chess; and there’s no guarantee that chess is even a step on the road to attaining intelligence of this sort.

In some ways, this failure to look at roleplaying as a technical hurdle for intelligence is strange.

D&D was a key cultural touchstone for technologists in the 1980s and the inspiration for many early text-based computer games, as Katie Hafner and Matthew Lyon point out in Where Wizards Stay up Late: The Origins of the Internet (1996).

Even today, AI researchers who play games in their free time often mention D&D specifically.

So instead of beating adversaries in games, we might learn more about intelligence if we tried to teach artificial agents to play together as we do: as paladins and elf rangers.

This article was originally published at Aeon and has been republished under Creative Commons.

Common Vitamin Has Been Linked to a Higher Risk of Acne

Vitamin B12 – found in many meat and dairy products and taken as a supplement for better brain function and to stave off anemia – might alter the genetic make-up of facial bacteria, promoting rapid inflammation that’s been linked to the formation of pimples, according to a 2015 study.

As many poor souls are well aware, acne isn’t just for teenagers. In fact, it affects most of us at some point in our lives, with an estimated 80 percent of people between the ages of 11 and 30 around the world experiencing a breakout.

The unluckiest of us will have to deal with the unsightly lumps and bumps well into our forties and fifties, and the worst part is that despite being an incredibly common affliction, scientists don’t actually know much about what causes acne and how to prevent or treat it.

To investigate, Huiying Li, a molecular pharmacologist at the University of California, Los Angeles, and her team decided to focus on high levels of B12 as a possible culprit, based on six decades of research linking it to a higher incidence of the condition.

“It has been reported several times that people who take B12 develop acne,” she told Arielle Duhaime-Ross at The Verge back in 2015.

The first thing they did was identify the molecular pathway that produces vitamin B12 in the skin bacterium Propionibacterium acnes, and compare its activity in people with good skin and people with acne-prone skin.

They found that the vitamin B12 biosynthesis pathway in P. acnes was significantly down-regulated in the acne patients as compared to the patients with healthy skin.

Next, they wanted to test the effects of an increased intake of B12 from external sources on the levels of naturally produced B12 in these skin bacteria. They gathered 10 volunteers with clear, healthy skin and gave each a vitamin B12 injection.

As Jennifer Abbasi reported at LiveScience, “The researchers confirmed that the B12 supplement repressed the expression of genes in P. acnes involved in synthesising the vitamin. In fact, the expression of those genes was lowered to levels similar to those of acne patients.”

So it looks like by taking extra vitamin B12, we could be prompting the bacteria in our skin to slow their own production of it, leading to an imbalance that could heighten our risk of developing acne.

According to the paper, which was published in Science Translational Medicine, one of the clear-skinned participants ended up developing acne one week after receiving the vitamin B12 injection.

When Li and her team examined gene expression in this participant’s P. acnes bacteria 14 days after the injection, they found it had shifted from resembling that of the other clear-skinned participants to resembling that of the acne-affected volunteers.

The team followed up the finding by performing lab tests in which vitamin B12 was added to P. acnes bacteria.

The bacteria responded by producing compounds called porphyrins, LiveScience reported, which are known to promote the kind of inflammation that previous research has linked to the appearance of severe acne.

“It’s exciting that we found that the potential link between B12 and acne is through the skin bacteria,” Li told Duhaime-Ross at The Verge.

Now, before you decide to stop taking supplements and cut down on anything rich in vitamin B12, such as fish, meat, poultry, eggs, and milk – you know, all the delicious things – remember that this is a small study, and there’s not a whole lot to go on yet, except that vitamin B12 looks like an intriguing candidate for further research.

The study was published in Science Translational Medicine.

This New Treatment Could Heal Tooth Cavities Without Any Fillings

Scientists have invented a new product that can encourage tooth enamel to grow back, which means we could finally have a game-changing way to treat dental cavities.

Researchers at the University of Washington have developed a treatment based on peptides – short chains of amino acids, linked by peptide bonds, that aren’t long enough to be considered full proteins.

When applied to artificially created dental lesions in a laboratory setting, the product remineralised tooth enamel, effectively “healing” the lesion.

“Remineralisation guided by peptides is a healthy alternative to current dental health care,” said materials scientist Mehmet Sarikaya.

Tooth enamel is produced by a type of cell called an ameloblast. These secrete the proteins that form enamel while the tooth is still in the gum.

Unfortunately, once the process of forming tooth enamel is complete and the tooth has emerged, our ameloblasts die off. But we continue to lose enamel throughout our lifetime.

“Bacteria metabolise sugar and other fermentable carbohydrates in oral environments and acid, as a by-product, will demineralise the dental enamel,” said dentistry researcher Sami Dogan.

To a small extent, our teeth can be remineralised with the help of saliva, fluoride toothpaste and drinking water additives.

But once there’s a visible cavity on the tooth, it needs to be treated by a dentist – which usually means drilling, and packing the hole with a dental filling.

To develop their new treatment, the team turned to one of the proteins produced by ameloblasts. Called amelogenins, these proteins play a key role in regulating the formation of tooth enamel.

The team designed peptides based on this protein and created a treatment with the peptide as an active ingredient.

They applied it to dental lesions in a laboratory setting and found that it helped form a new mineralised layer over the demineralised areas, integrating it with the enamel underneath.

They also treated similar lesions with fluoride, but only the peptide treatment resulted in the remineralisation of a relatively thick layer – resembling the structure of healthy enamel.

Tests still need to be undertaken to see if the peptide solution works as well in a living mouth as it did in the laboratory.

And for deep cavities that reach the dentine layer underneath the enamel, a filling would still likely be required.

But the researchers believe their product could be sold as part of a preventative everyday tooth care routine, in the form of a toothpaste or gel, to help minimize expensive trips to the dentist for shallower cavities.

“Peptide-enabled formulations will be simple and would be implemented in over-the-counter or clinical products,” Sarikaya said.

The team has published their research in the journal ACS Biomaterials Science & Engineering.

Japan Just Found a Huge Rare-Earth Mineral Deposit That Can Supply The World For Centuries

Researchers have found a deposit of rare-earth minerals off the coast of Japan that could supply the world for centuries, according to a new study.

The study, published in the journal Scientific Reports on Tuesday, says the deposit contains 16 million tons of the valuable metals.

Rare-earth minerals are used in everything from smartphone batteries to electric vehicles. By definition, these minerals contain one or more of 17 metallic rare-earth elements (for those familiar with the periodic table, those are on the second row from the bottom).

These elements are actually plentiful in layers of the Earth’s crust, but are typically widely dispersed. Because of that, it is rare to find any substantial amount of the elements clumped together as extractable minerals, according to the USGS.

Currently, there are only a few economically viable areas where they can be mined and they’re generally expensive to extract.

China has tightly controlled much of the world’s supply of these minerals for decades. That has forced Japan – a major electronics manufacturer – to rely on prices dictated by its neighbour.

A new finding that could change the global economy

The newly discovered deposit is enough to “supply these metals on a semi-infinite basis to the world,” the study’s authors wrote in the paper.

There’s enough yttrium to meet the global demand for 780 years, dysprosium for 730 years, europium for 620 years, and terbium for 420 years.

The cache lies off of Minamitori Island, about 1,150 miles (1,850 km) southeast of Tokyo. It’s within Japan’s exclusive economic zone, so the island nation has the sole rights to the resources there.

“This is a game changer for Japan,” Jack Lifton, a founding principal of a market-research firm called Technology Metals Research, told The Wall Street Journal.

“The race to develop these resources is well underway.”

Japan started seeking its own rare-earth mineral deposits after China withheld shipments of the substances amid a dispute over islands that both countries claim as their own, Reuters reported in 2014.

Previously, China reduced its export quotas of rare earth minerals in 2010, pushing prices up as much as 10 percent, The Journal reports. China was forced to start exporting more of the minerals again after the dispute was taken up at the World Trade Organization.

Rare-earth minerals can be formed by volcanic activity, but many of the rare-earth elements on our planet were forged in supernova explosions before Earth came into existence.

When Earth was formed, the minerals were incorporated into the deepest portions of the planet’s mantle, a layer of rock beneath the crust.

As tectonic activity has moved portions of the mantle around, rare earth minerals have found their way closer to the surface.

The process of weathering – in which rocks break down into sediment over millions of years – spread these rare minerals all over the planet.

The only thing holding Japan back from using its newly found deposit to dominate the global market for rare-earth minerals is the challenge involved in extracting them.

The process is expensive, so more research needs to be done to determine the cheapest methods, Yutaro Takaya, the study’s lead author, told The Journal.

Rare-earth minerals are likely to remain part of the backbone of some of the fastest-growing sectors of the global tech economy.

Japan now has the opportunity to control a huge chunk of the global supply, forcing countries that manufacture electronics, like China and the US, to purchase the minerals on Japan’s terms.

This article was originally published by Business Insider.

Immune Cells We Thought Were ‘Useless’ Are Actually a Weapon Against Infections Like HIV

A class of self-reactive immune cells long considered useless or even dangerous to our health could actually be a kind of secret weapon – lying in wait inside our bodies to fight off dangerous infections.

Using mice, researchers in Australia have discovered that so-called ‘silenced’ B cells – lymphocytes that are seemingly dormant, but which when activated can harm our own bodies in autoimmune conditions – can be ‘redeemed’ to attack harmful microbes our immune systems otherwise struggle to fight off.

“The big question about these cells has been why they are there at all, and in such large numbers,” explains immunogenomics researcher Chris Goodnow from the Garvan Institute of Medical Research.

“Why does the body keep these cells, whose self-binding antibodies pose a genuine risk to health, instead of destroying them completely, as we once thought?”

It now looks like we have an answer. Goodnow, who 30 years ago helped discover these silenced, self-reactive B cells, says the genetic machinery that makes the cells produce antibodies to attack our body’s own tissues can be adapted to instead combat foreign infections.

What’s so exciting here is that the adaptation essentially represents a new kind of immunity we never knew about.

This finding could pave the way to discovering new vaccines to fight infections like HIV and campylobacter, which hide from our immune systems by effectively mimicking our own biological material.

“This completely changes everyone’s thinking about how the immune system works – and it solves this problem of telling the difference between invaders and self,” Goodnow told The Australian Financial Review.

“The idea that you could start with a bad antibody and make it good just hasn’t been in anyone’s lexicon.”

The findings, which so far have been demonstrated in a mouse model, show how DNA mutations of antibody genes in germinal centers – where B cells activate during immune response – reprogram these self-reactive antibodies, making them stop binding to mouse tissue, and increasing their binding capacity to foreign invaders by up to 5,000 times.

“We’ve shown that these silenced cells do have a crucial purpose,” says first author of the study, Deborah Burnett, in a press release.

“Far from ‘clogging up’ the immune system for no good reason, they’re providing weapons to fight off invaders whose ‘wolf in sheep’s clothing’ tactics make it almost impossible for the other cells of the immune system to fight them.”

Now that we know how this hypermutation in the germinal center can take place, the researchers are hopeful it could one day lead to new kinds of treatments for dangerous human infections that evade our bodies’ conventional immune responses.

“The idea that self-reactive cells can contribute to immunity through germinal center redemption may be particularly important in responses to pathogens that cloak themselves in host antigens to avoid immunity,” explain immunologists Ervin E. Kara and Michel C. Nussenzweig from The Rockefeller University, in a commentary on the findings.

“HIV-1 is one such pathogen.”

It’s a remarkable turnaround for a class of immune cells long mistaken for dangerous junk – and one which shows there’s still so much we have to learn about what the immune system can do for us, and how its less than perfectly obvious mechanisms might be leveraged to do us good.

“We now know that every immune cell is precious when it comes to fighting invading microbes,” says Goodnow, “and we’ve learned that the immune system recycles, conserves, and polishes up its ‘bad apples’ instead of throwing them away.”

The findings are reported in Science.

Scientists Stick Needles in Brains to Figure Out What Needles Do to Brains

Neuroscience research is surprisingly brutal – a lot of what we’ve learned about the brain has come from opening up the organ and just poking around. Definitely not an activity for the squeamish.

The best tool for the job? Often, it’s electrodes – needle-like probes that can be inserted into the brain.

Researchers use electrodes to measure how individual brain cells behave, to give people control over prosthetic limbs, or to develop other technology that interacts directly with the brain.

But there’s reason to question exactly how much these probes can teach us, or if they’re even safe, according to an article published April 6 in the Journal of Neural Engineering.

In it, neuroscientists point out that studying a brain with neural electrodes can cause quite a few issues.

Some of these problems are relatively simple, and can be solved through better engineering.

For example, the surfaces of these electrodes that contact, stimulate, or record brain activity can degrade or slip – especially in a conscious research participant.

This can give rise to faulty recordings; a degraded electrode would make it seem like the cell it’s measuring is giving off a weaker signal than it really is. Because we can’t always tell why (or even if) these issues are occurring, it can be difficult for researchers to support their findings.

But the biggest problem the team found goes back to the fact that we actually know very little about the brain. In particular, we don’t know much about how our brain tissues respond to being jabbed with an electrode.

For all we know, the article points out, neuroscientists have spent countless experiments trying to study brain cells that they killed or damaged while inserting the electrode.

There are some solutions out there; for these, the article focuses on examples from the brain’s visual cortex.

For example, scientists can tell whether or not the cells they’re studying are still alive simply by having their research subject look at a visual stimulus and checking whether the cells respond.

But even so, the researchers concluded that our technology has caught up to the limits of what we actually know about the brain.

In order for neuroscientists to regain confidence in their experimental findings, we will need to invest in actually sorting out these basic questions of how brains are responding to electrodes and other technological interventions.

Do We Really Only Get a Certain Number of Heartbeats in a Lifetime? Here’s What Science Says

“I believe that every human has a finite number of heartbeats,” the famous quote goes. “I don’t intend to waste any of mine running around doing exercises.”

Contrary to what you might have heard, Neil Armstrong never said this. What’s more, he disagreed with it. But misattribution aside, was Armstrong right to disagree?

The simplest answer is ‘yes’. There is no strict tally for your ticker, keeping track of your pulse until you’ve used up your allocation of beats. So get out and exercise (after you’ve finished reading this article, of course).

But there is a more complex answer, one that suggests there is at least some kind of relationship between our heart rate and overall life expectancy.

In 2013, a team of Danish researchers published 16 years of work on just under 5,200 men in the BMJ journal Heart.

Of the roughly 2,800 individuals who provided a decent bank of medical data, just over a third had passed away by the end of the trial from various causes.

Matching the sample’s resting heart rates with the rate of mortality led the researchers to believe that higher pulses correlated with a greater chance of dying.

Those with between 71 and 80 beats per minute had a 51 percent greater chance of kicking the bucket during that period than those with a resting rate of under 50 beats. At 81 to 90 beats per minute, that risk was double. Over 90, and it tripled.

In case you’re thinking this was all about fitness or risk of cardiovascular disease, they took those factors into account. Even those who were in otherwise good physical condition seemed to be at risk, so once again, don’t use this as an excuse to avoid going for a run.

This sly nod to a relationship between life expectancy and heart rate extends past individual humans – other animals appear to obey a similar ballpark rule.

Check out this website to get some idea of what your pulse is like when compared with, say, a giraffe’s.

As we’ve seen, humans have on average a heart rate of around 60 to 70 beats per minute, give or take. We live roughly 70 or so years, giving us just over 2 billion beats all up.

Chickens have a faster heart rate of about 275 beats per minute, and live only 15 years. On balance, they also have about 2 billion beats.

We seem kind of lucky. A whale has around 20 beats per minute, and lives only slightly longer than us. It gets just under a billion heart beats.

An elephant? Try 30 beats per minute for around 70 years, giving roughly a billion as well.

The poor little skittish hamster has a rapid-fire pulse of 450 beats every minute, squeezed into three short years. That also adds up to a little under a billion.
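These are ballpark figures, but the arithmetic is easy to check. Here’s a minimal Python sketch using the rates and lifespans quoted above (the whale’s 80-year lifespan is an assumed stand-in for ‘slightly longer than us’):

```python
# Back-of-the-envelope totals: beats per minute x minutes per year x years.
MINUTES_PER_YEAR = 60 * 24 * 365

animals = {
    # name: (approximate resting beats per minute, rough lifespan in years)
    "human":    (65,  70),
    "chicken":  (275, 15),
    "whale":    (20,  80),   # assumed lifespan, "slightly longer than us"
    "elephant": (30,  70),
    "hamster":  (450, 3),
}

for name, (bpm, years) in animals.items():
    total = bpm * MINUTES_PER_YEAR * years
    print(f"{name:>8}: ~{total / 1e9:.1f} billion beats")
```

Run it and the human comes out at roughly 2.4 billion beats, the chicken at about 2.2 billion, and the whale, elephant and hamster all land within spitting distance of a billion – just as described above.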

This rule isn’t a hard and fast one, given differences of hundreds of millions of beats here and there.

But if we look at it in rough orders of magnitude, there does seem to be a heart-wrenching link between living fast and dying young for all creatures great and small.


If You Thought Quantum Mechanics Was Weird, Check Out Entangled Time

Where the future influences the past.

In the summer of 1935, the physicists Albert Einstein and Erwin Schrödinger engaged in a rich, multifaceted and sometimes fretful correspondence about the implications of the new theory of quantum mechanics.

The focus of their worry was what Schrödinger later dubbed entanglement: the inability to describe two quantum systems or particles independently, after they have interacted.

Until his death, Einstein remained convinced that entanglement showed how quantum mechanics was incomplete. Schrödinger thought that entanglement was the defining feature of the new physics, but this didn’t mean that he accepted it lightly.

“I know of course how the hocus pocus works mathematically,” he wrote to Einstein on 13 July 1935. “But I do not like such a theory.”

Schrödinger’s famous cat, suspended between life and death, first appeared in these letters, a byproduct of the struggle to articulate what bothered the pair.

The problem is that entanglement violates how the world ought to work. Information can’t travel faster than the speed of light, for one.

But in a 1935 paper, Einstein and his co-authors showed how entanglement leads to what’s now called quantum nonlocality, the eerie link that appears to exist between entangled particles.

If two quantum systems meet and then separate, even across a distance of thousands of light years, it becomes impossible to measure the features of one system (such as its position, momentum and polarity) without instantly steering the other into a corresponding state.

To date, most experiments have tested entanglement over spatial gaps.

The assumption is that the ‘non-local’ part of quantum non-locality refers to the entanglement of properties across space. But what if entanglement also occurs across time? Is there such a thing as temporal non-locality?

The answer, as it turns out, is yes.

Just when you thought quantum mechanics couldn’t get any weirder, a team of physicists at the Hebrew University of Jerusalem reported in 2013 that they had successfully entangled photons that never coexisted.

Previous experiments involving a technique called ‘entanglement swapping’ had already shown quantum correlations across time, by delaying the measurement of one of the coexisting entangled particles; but Eli Megidish and his collaborators were the first to show entanglement between photons whose lifespans did not overlap at all.

Here’s how they did it.

First, they created an entangled pair of photons, ‘1-2’ (step I). Soon after, they measured the polarization of photon 1 (a property describing the direction of light’s oscillation) – thus ‘killing’ it (step II).

Photon 2 was sent on a wild goose chase while a new entangled pair, ‘3-4’, was created (step III). Photon 3 was then measured along with the itinerant photon 2 in such a way that the entanglement relation was ‘swapped’ from the old pairs (‘1-2’ and ‘3-4’) onto the new ‘2-3’ combo (step IV).

Some time later (step V), the polarization of the lone survivor, photon 4, was measured, and the results were compared with those of the long-dead photon 1 (back at step II).

The upshot? The data revealed the existence of quantum correlations between ‘temporally non-local’ photons 1 and 4. That is, entanglement can occur across two quantum systems that never coexisted.
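For readers who want to see the algebra behind the swap, here is a minimal numpy sketch – my own illustration using the standard Bell-state formalism, not the team’s optical setup. It prepares two Bell pairs, projects photons 2 and 3 onto a Bell state (one possible outcome of a Bell measurement), and confirms that photons 1 and 4 end up maximally entangled. The unusual timing in the experiment doesn’t change this linear algebra.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Bell state |Phi+> = (|00> + |11>) / sqrt(2)
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Photons ordered 1-2-3-4: pairs (1,2) and (3,4) each start in |Phi+>
state = np.kron(phi_plus, phi_plus).reshape(2, 2, 2, 2)  # axes (q1,q2,q3,q4)

# Bell measurement on photons 2 and 3: project onto the |Phi+> outcome.
# Contracting <Phi+|_{23} with the state leaves an (unnormalised)
# two-photon state for photons 1 and 4.
phi_23 = phi_plus.reshape(2, 2)
post = np.einsum('bc,abcd->ad', phi_23.conj(), state)
post /= np.linalg.norm(post)

print(np.round(post, 3))
# [[0.707 0.   ]
#  [0.    0.707]]  -> photons 1 and 4 are now themselves in |Phi+>
```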

What on Earth can this mean? Prima facie, it seems as troubling as saying that the polarity of starlight in the far-distant past – say, greater than twice Earth’s lifetime – nevertheless influenced the polarity of starlight falling through your amateur telescope this winter.

Even more bizarrely: maybe it implies that the measurements carried out by your eye upon starlight falling through your telescope this winter somehow dictated the polarity of photons more than 9 billion years old.

Lest this scenario strike you as too outlandish, Megidish and his colleagues can’t resist speculating on possible and rather spooky interpretations of their results.

Perhaps the measurement of photon 1’s polarization at step II somehow steers the future polarization of 4, or the measurement of photon 4’s polarization at step V somehow rewrites the past polarization state of photon 1.

In both forward and backward directions, quantum correlations span the causal void between the death of one photon and the birth of the other.

Just a spoonful of relativity helps the spookiness go down, though.

In developing his theory of special relativity, Einstein deposed the concept of simultaneity from its Newtonian pedestal.

As a consequence, simultaneity went from being an absolute property to being a relative one. There is no single timekeeper for the Universe; precisely when something occurs depends on your location and motion relative to what you are observing, known as your frame of reference.

So the key to avoiding strange causal behavior (steering the future or rewriting the past) in instances of temporal separation is to accept that calling events ‘simultaneous’ carries little metaphysical weight.

It is only a frame-specific property, a choice among many alternative but equally viable ones – a matter of convention, or record-keeping.

The lesson carries over directly to both spatial and temporal quantum non-locality.

Mysteries regarding entangled pairs of particles amount to disagreements about labeling, brought about by relativity.

Einstein showed that no sequence of events can be metaphysically privileged – can be considered more real – than any other. Only by accepting this insight can one make headway on such quantum puzzles.

The various frames of reference in the Hebrew University experiment (the lab’s frame, photon 1’s frame, photon 4’s frame, and so on) have their own ‘historians’, so to speak.

While these historians will disagree about how things went down, not one of them can claim a corner on truth. A different sequence of events unfolds within each one, according to that spatiotemporal point of view.

Clearly, then, any attempt at assigning frame-specific properties generally, or tying general properties to one particular frame, will cause disputes among the historians.

But here’s the thing: while there might be legitimate disagreement about which properties should be assigned to which particles and when, there shouldn’t be disagreement about the very existence of these properties, particles, and events.

These findings drive yet another wedge between our beloved classical intuitions and the empirical realities of quantum mechanics.

As was true for Schrödinger and his contemporaries, scientific progress is going to involve investigating the limitations of certain metaphysical views.

Schrödinger’s cat, half-alive and half-dead, was created to illustrate how the entanglement of systems leads to macroscopic phenomena that defy our usual understanding of the relations between objects and their properties: an organism such as a cat is either dead or alive. No middle ground there.

Most contemporary philosophical accounts of the relationship between objects and their properties embrace entanglement solely from the perspective of spatial non-locality.

But there’s still significant work to be done on incorporating temporal non-locality – not only in object-property discussions, but also in debates over material composition (such as the relation between a lump of clay and the statue it forms), and part-whole relations (such as how a hand relates to a limb, or a limb to a person).

For example, the ‘puzzle’ of how parts fit with an overall whole presumes clear-cut spatial boundaries among underlying components, yet spatial non-locality cautions against this view. Temporal non-locality further complicates this picture: how does one describe an entity whose constituent parts are not even coexistent?

Discerning the nature of entanglement might at times be an uncomfortable project. It’s not clear what substantive metaphysics might emerge from scrutiny of fascinating new research by the likes of Megidish and other physicists.

In a letter to Einstein, Schrödinger notes wryly (and deploying an odd metaphor): “One has the feeling that it is precisely the most important statements of the new theory that can really be squeezed into these Spanish boots – but only with difficulty.”

We cannot afford to ignore spatial or temporal non-locality in future metaphysics: whether or not the boots fit, we’ll have to wear ’em.

Elise Crull is an assistant professor of history and philosophy of science at the City College of New York. She’s co-author of the upcoming book “The ‘Einstein Paradox’: Debates on Non-locality and Incompleteness in 1935”.

 

10 Weather Myths Everyone Gets Wrong

Let’s get the difference clear between ‘weather’ and ‘climate’.

Not everyone is a weather enthusiast. But ever since I was old enough to start forming memories, I’ve been fascinated – even obsessed – with weather.

For my seventh-grade science fair project, I predicted the weather for a week using nothing but homemade materials, including a simple barometer and hygrometer.

I was pretty darned accurate. I thought to myself: If I could figure out if it was going to rain or snow, or that the wind would soon be kicking up, with just basic tools, couldn’t everybody?

In fact, the only reason I’m not a meteorologist is because I hated my college physics class.

Now that I’m an adult, I’m a little more realistic about other people’s interest in weather. Most people don’t have any. But still, I’m often dismayed by how little the people around me know about basic weather facts.

While my knowledge of weather has only deepened over the years, it seems to me that most Americans make no effort to comprehend even the most basic concepts about our climate, the atmosphere and how it all works.

As a result, I often find myself “rain-splaining” the simplest stuff.

Sure, not everyone gets what the Greenland Block is, or how the North Atlantic Oscillation might affect the upcoming weekend, but I think we can all agree that there are simple concepts every layperson should know about our weather.

I, therefore, present the 10 weather facts that (I think) every layperson should know:

1. Hail is not sleet, and sleet is not freezing rain.

Whenever it sleets in winter, and someone says “Hey, it’s hailing out,” my skin really crawls. And when the sleet pinging on the window is mistaken for freezing rain, I can’t help but feel that our education system has let us all down.

Sleet is a partially melted snowflake that freezes into ice pellets before it hits the ground; freezing rain is rain that freezes when it hits the surface; and hail forms during intense thunderstorms, usually during warmer months.

2. Weather and climate are not the same thing.

Rule of thumb: When it’s unusually cold for a week where you’re located, that’s weather. When it’s warmer than normal over the entire planet for a period of years, that’s climate. There’s a reason it’s called “climate change” and not “weather change.”

There are sayings/metaphors to help you here:

Climate is what you expect, weather is what you get.

Climate is your personality, and weather is your mood.

3. What causes wind.

To paraphrase Meghan Trainor, it’s all about those pressure gradients, those pressure gradients, those pressure gradients.

You see, when high pressure gets too close to low pressure, like two people crammed on a Metro train, the air needs to move somewhere, so it goes from high pressure to low pressure.
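To put toy numbers on that push (illustrative figures of my own, not a forecast), the acceleration air feels is just the pressure difference divided by air density and distance:

```python
# A 4 hPa pressure drop across 100 km of air at typical near-surface density.
delta_p = 400.0       # pressure difference in pascals (4 hPa)
distance = 100_000.0  # meters between the high and the low
rho = 1.2             # air density in kg/m^3

# Pressure-gradient acceleration: a = (1/rho) * (delta_p / distance)
a = delta_p / (rho * distance)
print(f"acceleration: {a:.4f} m/s^2")       # ~0.0033 m/s^2
print(f"after 1 hour: {a * 3600:.0f} m/s")  # ~12 m/s, if nothing opposed it
```

In the real atmosphere, friction and the Coriolis effect push back long before the wind runs away, but the sketch shows why even a modest pressure difference gets the air moving.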

4. When we say ‘It’s humid,’ we really mean ‘It’s relatively humid.’

The percentage of humidity in the air doesn’t mean much unless you know what the temperature is.

Hot air holds a lot more moisture than cold air, so 50 percent humidity at 90 degrees is a much bigger deal than 50 percent humidity at 40 degrees.
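Here’s a quick sketch of that arithmetic, using the widely used Magnus approximation for saturation vapor pressure (the constants are common textbook values):

```python
import math

def saturation_vapor_pressure(temp_c):
    """Magnus approximation: saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def f_to_c(temp_f):
    return (temp_f - 32) * 5 / 9

for temp_f in (90, 40):
    vapor = 0.5 * saturation_vapor_pressure(f_to_c(temp_f))  # 50% RH
    print(f"50% humidity at {temp_f}F -> ~{vapor:.1f} hPa of water vapor")
# ~24 hPa at 90F versus ~4 hPa at 40F: nearly six times the moisture
```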

5. It’s never too cold to snow.

Next time you hear someone say “It’s too cold to snow,” ask them: Why is it so white in Antarctica?

6. You can still get sunburn when it’s cloudy.

Believe it or not, as much as 80 percent of the sun’s UV rays can penetrate cloud cover. Okay, so maybe you can apply 20 percent less sunscreen.

7. Forecasting the weather is pretty hard.

Blaming the weatherperson for a botched forecast is like blaming an odds maker when your team loses.

Predicting weather isn’t an exact science, and when you utter, “but it was supposed to rain today,” you’re basically saying “I don’t know what the word ‘prediction’ means.” There’s no “supposed to” in weather.

It’s all about probability, and chaos plays a large role in what kind of weather occurs.

8. Hurricanes, tropical cyclones and typhoons are all the same thing.

And they’re all different from tornadoes.

9. A snowstorm isn’t a blizzard unless it’s a blizzard.

A heavy snowfall isn’t a blizzard.

The National Weather Service defines a “blizzard” as heavy snow with winds in excess of 35 mph and visibility of less than a quarter-mile for more than three hours. Also, please don’t use “blizzard” as a verb, as in “it’s blizzarding outside.”
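Put another way, the definition is a simple three-part test. Here it is as a tiny sketch, with the thresholds lifted straight from the sentence above:

```python
def is_blizzard(wind_mph, visibility_miles, duration_hours):
    """Blizzard per the quoted definition: winds in excess of 35 mph and
    visibility under a quarter mile, lasting more than three hours."""
    return wind_mph > 35 and visibility_miles < 0.25 and duration_hours > 3

print(is_blizzard(45, 0.1, 4))   # True: an actual blizzard
print(is_blizzard(15, 0.2, 6))   # False: just a heavy snowfall
```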

10. There’s no such thing as “heat lightning”.

It can’t be so hot that lightning spontaneously happens, though that would be pretty cool. What you think of as “heat lightning” is really lightning that’s coming from a thunderstorm far away.

And get the spelling right. It’s not “lightening,” which “involves ladies’ plumbing and pregnancy and things we don’t need to be talking about here,” quipped James Spann, the broadcast meteorologist in Birmingham, Alabama.

In summary, the world would be a better place if more people could correctly talk about the weather.

2018 © The Washington Post

 

 


Facebook Has Data on You, Even if You’re Not on Facebook

Facebook’s founder and chief executive Mark Zuckerberg faced two days of grilling before US politicians this week, following concerns over how his company deals with people’s data.

But the data Facebook has on people who are not signed up to the social media giant also came under scrutiny.

During Zuckerberg’s congressional testimony he claimed to be ignorant of what are known as “shadow profiles”.

Zuckerberg: I’m not — I’m not familiar with that.

That’s alarming, given that we have been discussing this element of Facebook’s non-user data collection for the past five years, ever since the practice was brought to light by researchers at Packet Storm Security.

Maybe it was just the phrase “shadow profiles” with which Zuckerberg was unfamiliar. It wasn’t clear, but others were not impressed by his answer.

Facebook’s proactive data-collection processes have been under scrutiny in previous years, especially as researchers and journalists have delved into the workings of Facebook’s “Download Your Information” and “People You May Know” tools to report on shadow profiles.

Shadow profiles

To explain shadow profiles simply, let’s imagine a social group of three people – Ashley, Blair and Carmen – who already know one another, and have each other’s email addresses and phone numbers in their phones.

If Ashley joins Facebook and uploads her phone contacts to Facebook’s servers, then Facebook can proactively suggest friends whom she might know, based on the information she uploaded.

For now, let’s imagine that Ashley is the first of her friends to join Facebook. The information she uploaded is used to create shadow profiles for both Blair and Carmen — so that if Blair or Carmen joins, they will be recommended Ashley as a friend.

Next, Blair joins Facebook, uploading his phone’s contacts too. Thanks to the shadow profile, he has a ready-made connection to Ashley in Facebook’s “People You May Know” feature.

At the same time, Facebook has learned more about Carmen’s social circle — in spite of the fact that Carmen has never used Facebook, and therefore has never agreed to its policies for data collection.
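To make those mechanics concrete, here is a minimal, hypothetical sketch of how contact uploads could seed records for people who never signed up. The ShadowGraph class and its methods are purely illustrative assumptions; Facebook’s actual systems are not public.

```python
from collections import defaultdict

class ShadowGraph:
    """Illustrative model of contact uploads seeding non-member records."""

    def __init__(self):
        self.members = set()
        self.known_by = defaultdict(set)  # person -> members who uploaded them

    def join(self, person, contacts):
        """A new member signs up and uploads their address book."""
        self.members.add(person)
        for contact in contacts:
            self.known_by[contact].add(person)

    def suggestions(self, person):
        """'People You May Know': members who had this person in their contacts."""
        return self.known_by[person] & self.members

graph = ShadowGraph()
graph.join("Ashley", {"Blair", "Carmen"})
print(graph.known_by["Carmen"])    # {'Ashley'}: a record exists, no consent given
graph.join("Blair", {"Ashley", "Carmen"})
print(graph.suggestions("Blair"))  # {'Ashley'}: the ready-made connection
print(graph.known_by["Carmen"])    # Ashley and Blair both vouch for Carmen
```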

Despite the scary-sounding name, I don’t think there is necessarily any malice or ill will in Facebook’s creation and use of shadow profiles.

It seems like an earnestly designed feature in service of Facebook’s goal of connecting people. It’s a goal that clearly also aligns with Facebook’s financial incentives for growth and garnering advertising attention.

But the practice brings to light some thorny issues around consent, data collection, and personally identifiable information.

What data?

Some of the questions Zuckerberg faced this week highlighted issues relating to the data that Facebook collects from users, and the consent and permissions that users give (or are unaware they give).

Facebook is often quite deliberate in its characterisations of “your data”, rejecting the notion that it “owns” user data.

That said, there are a lot of data on Facebook, and what exactly is “yours” or just simply “data related to you” isn’t always clear.

“Your data” notionally includes your posts, photos, videos, comments, content, and so on. It’s anything that could be considered copyrightable work or intellectual property (IP).

What’s less clear is the state of your rights relating to data that is “about you”, rather than supplied by you. This is data that is created by your presence or your social proximity to Facebook.

Examples of data “about you” might include your browsing history and data gleaned from cookies, tracking pixels, and the like button widget, as well as social graph data supplied whenever Facebook users supply the platform with access to their phone or email contact lists.

Like most internet platforms, Facebook rejects any claim to ownership of the IP that users post.

To avoid falling foul of copyright issues in the provision of its services, Facebook demands (as part of its user agreements and Statement of Rights and Responsibilities) a:

…non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

Data scares

If you’re on Facebook then you’ve probably seen a post that keeps making the rounds every few years, saying:

In response to the new Facebook guidelines I hereby declare that my copyright is attached to all of my personal details…

Part of the reason we keep seeing data scares like this is that Facebook’s lacklustre messaging around user rights and data policies have contributed to confusion, uncertainty and doubt among its users.

It was a point that Republican Senator John Kennedy raised with Zuckerberg this week.

Senator John Kennedy’s exclamation is a strong but fair assessment of the failings of Facebook’s policy messaging.

After the grilling

Zuckerberg and Facebook should learn from this congressional grilling that they have struggled and occasionally failed in their responsibilities to users.

It’s important that Facebook now makes efforts to communicate more strongly with users about their rights and responsibilities on the platform, as well as the responsibilities that Facebook owes them.

This should go beyond a mere awareness-style PR campaign. It should seek to truly inform and educate Facebook’s users, and people who are not on Facebook, about their data, their rights, and how they can meaningfully safeguard their personal data and privacy.

Given the magnitude of Facebook as an internet platform, and its importance to users across the world, the specter of regulation will continue to raise its head.

Ideally, the company should look to broaden its governance horizons by seeking to truly engage in consultation and reform with Facebook’s stakeholders – its users – as well as the civil society groups and regulatory bodies that seek to empower users in these spaces.