Earth Week 2015: How you can help

Every year, we celebrate Earth Day on April 22 to mark the anniversary of a movement that started in 1970. The founder, Gaylord Nelson, then a US Senator from Wisconsin, thought of the idea after the massive 1969 oil spill in Santa Barbara, California. Inspired by the student anti-war movement (much of which started here at Berkeley), he realized that by introducing sustainability into the public consciousness, he could force politicians to pay attention to environmental protection. As a result, on the 22nd of April, thousands across the nation took to the streets to raise awareness about sustainability, and hundreds of protests were organized. The movement lives on today as Earth Day, and, more recently, has been extended to Earth Week.

You don’t have to plant a forest, or save the whales, to make a difference this Earth Week. Starting small can make a tremendous difference if everyone pitches in. Here are some ways you can help:

1. Cancel your paper bills and switch to online bills. This can save 23 pounds of wood and 29 pounds of greenhouse gas emissions every year.

2. Rather than visiting a large grocery store chain, buy locally produced sustainable food.

3. Get into the habit of carrying around a reusable mug for coffee or tea. This way you’ll always have it handy whenever you need a pick-me-up.

4. Go vegetarian once a week. Did you know that it takes around 2,500 gallons of water to produce one pound of beef? Considering that California is in a drought, you can really help out by going meatless as often as possible.

5. Take shorter showers, and skip baths entirely.

6. Open your windows and turn off the lights! You’ve probably heard this one before, but it can’t be said enough. Since the days are getting longer now, there’s no reason your lights should be on between the hours of 9 in the morning and 6 in the evening.

7. Start actively recycling and composting. It can be confusing to know exactly what belongs in each bin, but this post from the Daily Clog can help you out with that.

8. Reevaluate your shopping choices: there are so many brands available to us, and as students, we generally pick the cheapest one. However, there’s always a way to find a balance between price and sustainability, so do some research to find the products that are the least damaging to the environment.

9. Take reusable bags when you go grocery shopping. Grocery shopping as a student is a whole process, so plan ahead and make sure you have reusable bags with you when you go.

10. Share and discuss! Share these ideas with others, and raise awareness about the environment, sustainability, and helping out in your community.


Editor’s Picks

Light’s dual nature as both a particle and a wave has confused us all since the theory was proposed. For the first time, scientists have captured a photograph of light behaving as both a particle and a wave, using electrons to image the light.

Ever wonder why you really can’t eat just one potato chip? In this investigative journalism piece, Michael Moss explores the science behind addictive junk food, gathering research from multiple interviews, observations, and scientific studies.

If you’ve ever heard an a cappella performance (Pitch Perfect, anyone?), you’ve heard someone beatboxing. The range of sounds that the human voice can produce is truly amazing. This article explores some of the mechanisms behind the phenomenon, and how they may shed light on the processes behind human communication.


Editor’s Picks

After a hiatus of nearly a year, Berkeley Scientific Journal is proud to announce that our blog is back! Our aim is to provide a platform for young scientists to discuss issues they are passionate about, and share their thoughts with the public. “Editor’s Picks” is a new series of posts that will regularly feature great science journalism from all around the web.

An opinion piece in the Berkeley Science Review, “If you want people to get vaccines – then listen” by David Litt, discusses the growing bias against vaccination and why that may be the case. Litt explores some of the reasons behind this fear and provides great insight on how to start a dialogue about vaccination.

Art meets science in this stunning series of images from Colin Salter’s book “Science is Beautiful: The Human Body Under the Microscope,” which provides a great way to visualize some of the more abstract scientific concepts we hear about.

In honor of yesterday, the Pi Day of the century, Manil Suri’s “Don’t Expect Math to Make Sense” tries to grasp what pi truly represents and how it manifests itself all around us. The enigma surrounding pi raises the question: is the universe more complicated than we could ever imagine, or is it maddeningly simple?


The Reading Revolution

For years scientists have asserted that language is the one characteristic that sets humans apart from animals. The ability to speak and communicate is believed to have emerged around 50,000 years ago, along with the development of tools and an increase in brain size. Scientists have identified Broca’s and Wernicke’s areas as regions associated with language, but the ability to read is a little more perplexing. Written language has not been around long enough to drive evolutionary change, and yet we know that something happens to our brains when we read.

A few years ago, Stanislas Dehaene, a cognitive neuroscientist at the Collège de France, teamed up with colleagues to conduct a study on 63 volunteers – 31 who had learned to read in childhood, 22 who had learned as adults, and 10 who were illiterate. He found that those who could read showed a stronger response to the written word in several areas, including regions of the left hemisphere associated with spoken language.

However, that’s not all. Recently it was found that reading doesn’t just activate regions like Broca’s and Wernicke’s areas, but also other areas associated with the content of what you are reading. A word such as ‘cinnamon’ activates areas of the brain associated with smell. Apparently, the brain doesn’t make much of a distinction between reading about something and actually experiencing it. Reading tricks the brain into thinking it is doing something it’s not, a phenomenon known as embodied cognition. As we read more and more, the brain uses the experiences and stories we absorb to understand emotions and social situations, constructing a ‘map’ of others’ intentions known as “theory of mind.”

An even more recent study, published a few months ago, tested 19 subjects as they read the novel Pompeii by Robert Harris. The researchers conducted fMRI scans on the subjects over the course of 19 days and found that, after the reading, there was an increase in connectivity between the left angular/supramarginal gyri and the right posterior temporal gyri, regions associated with perspective-taking and comprehension. These effects seemed to peak soon after the reading and faded with time, though more lasting changes were observed in the bilateral somatosensory cortex. But despite the study’s intriguing implications, people were too quick to publicize these results.

It’s easy to claim that x changes the brain in y number of ways, but this is a simplification of a very complex system that we still do not completely understand. While it may be claimed that a certain part of the brain lights up during a specific activity, most of the brain is already busy with activity – a scientist can really only observe additional activity in that area. Not only did the study have too small a sample size and lack a control group, but it also failed to make a clear distinction between the situations the subjects were experiencing. How did the researchers know that it was the reading that was making these changes? It is possible that the changes occurred because of the testing environment the subjects shared for those 19 days.

Of course, I’m sure the researchers had other methods to verify that the observed activity correlated with reading, but you can see why I’m not impressed by their conclusions. It’s been fairly obvious for a few years now that reading is a workout for the brain and has long-lasting benefits that go beyond basic language acquisition – there had to be a reason my parents always told me to turn off the TV and go read a book.

I don’t think we should be asking whether reading rewires the brain. Maybe, instead, we should be asking: does the way I read affect my brain? Would the way I process information change if I read a paperback instead of a digital version?

Before 1992, studies showed that people read more slowly on screens, but since then, results have been more inconclusive. At a surface level, screens can drain mental resources and make it harder for us to remember things after we’re done. People also approach technological media with a state of mind that is less conducive to learning, even if only subconsciously. Reading a paperback sometimes just feels more real – though we may understand writing and language as abstract phenomena, to our brains, reading is part of the physical world.

As mentioned before, the brain is not hardwired for reading – the written word was only invented around 4,000 BC, and since then, our brains have had to repurpose some of their regions to adapt. Some of these regions excel at object recognition, helping us understand how the line strokes, curves, and shapes of a letter correspond to a certain sound, or how these letters, joined together, can create a word. Other regions, perhaps more importantly, can create a physical landscape of a text, in the same way that we construct mental representations of terrain, offices, or homes. When remembering certain information from a text, we often remember where in the text it appeared; in my copy of Pride and Prejudice, I can remember very clearly that Darcy professed his love for Elizabeth in the middle of a left-hand page.

In this context, paper books have a more obvious topography than virtual books. A paperback is a physical, three-dimensional object, whereas a virtual book is just that – virtual. A paperback has a left side and a right side, and eight corners with which readers can orient themselves. One can see where a book begins and where it ends. One can flip through the pages of a book and gauge its thickness. Even the process of turning pages creates a lilting rhythm, leaving a ‘footprint’ on the brain.

Screens and e-readers, on the other hand, interfere with the brain’s ability to construct a mental landscape. Imagine using Google Maps, but only in Street View, walking through each street one at a time, unable to zoom out and see the whole picture at once. This is similar to what we experience when trying to navigate virtual documents.

Additionally, e-readers interfere with two key components of comprehension: serendipity and a sense of control. Readers often feel that a specific sentence or section of a book reminds them of an earlier part, and they want to flip back to read that part again. They also like to be able to highlight, underline, and write notes in a book. Thus, while reading a paperback engages the tactile, auditory, visual, and olfactory senses, virtual text requires only one: vision.

Erik Wastlund, a researcher in experimental psychology, has conducted rigorous research on the differences between screen and paper reading. In one experiment, 72 volunteers completed a READ test (a 30-minute Swedish-language reading comprehension test). People who took the test on a computer scored lower on average and reported feeling more tired than those who took it on paper, showing that screens can also be more mentally and physically taxing.

The problem is that people take shortcuts when reading on a device, such as scanning and using the search tool to locate specific keywords instead of reading the document straight through. They are also less likely to engage in metacognitive learning regulation when reading on screens, which involves strategies such as setting specific goals, rereading certain sections, and testing oneself along the way on how well the material has been understood.

So far, e-readers have been trying to copy the paperback: e-readers reflect ambient light just like books, there are ways to bookmark and highlight text, and some have even added a ‘depth’ feature, which makes it seem like there are piles of pages on the left and right sides of the screens. Even so, many will agree with me when I say – it’s just not the same.

There are certain advantages to virtual text and media presentation that have not been fully realized yet. A few apps on the market are trying to revolutionize the way we take in information, such as Rooster, which divides books into manageable 15-minute sections to read on the way to the office, or Spritz, which streams content one word at a time, positioned around the eye’s Optimal Recognition Point. And yet, content production still revolves around the same models that have been in circulation for hundreds of years.

Learning is most effective when it engages different regions of the brain and connects different topics. Some tools have the right idea – the popular Scale of the Universe feature uses the scroll bar to communicate an idea that could not have been conveyed as effectively on paper – but interactive media still hasn’t reached its full potential.

So maybe people have the wrong idea with trying to replicate the experience of a paperback. Maybe we should be heading in a completely different direction.



MIT Scientists Produce Hybrid Material

According to a paper published in Nature Materials, scientists at the Massachusetts Institute of Technology (MIT) have recently combined inorganic matter with living cells, using E. coli bacteria to develop a material that has properties of both living and non-living things.

Led by Timothy Lu, a professor of electrical engineering and biological engineering, the MIT researchers used E. coli bacteria to produce biofilms (groups of microorganisms whose cells stick to a surface through adhesion) that contain “curli fibers” composed of a protein called CsgA. CsgA helps bacteria attach to surfaces, allowing them to grip non-living materials.

(E. coli bacteria, used to produce the biofilms that contain the curli fibers. Wikipedia)

MIT News reported that the researchers manipulated these E. coli cells to create various curli fibers. Since CsgA can only be produced under specific conditions, the researchers engineered a strain in which CsgA is produced only in the presence of a molecule called AHL. With AHL, the cells secreted CsgA, and by controlling the amount of AHL present, the researchers were able to control the production of curli fibers. In addition, the researchers wanted to create curli fibers that could specifically grab onto gold nanoparticles, so they produced a version of CsgA with peptides containing clusters of the amino acid histidine; this particular CsgA is produced only in the presence of a molecule called aTc. The researchers then manipulated the amounts of AHL and aTc so that a sufficient amount of curli fibers could grab onto the gold nanoparticles.

In the end, the scientists were able to produce a network of gold nanowires, which can conduct electricity.

The scientists also produced another type of curli fiber that attaches to a protein dubbed SpyCatcher. The researchers added quantum dots, which are semiconductor nanocrystals, and these curli fibers were able to grab onto them. The scientists grew the different strains of E. coli (which contained the slightly different curli fibers) and were able to create a material containing both gold nanoparticles and quantum dots.

“It’s a really simple system but what happens over time is you get curli that’s increasingly labeled by gold particles. It shows that indeed you can make cells that talk to each other and they can change the composition of the material over time. Ultimately, we hope to emulate how natural systems, like bone, form. No one tells bone what to do, but it generates a material in response to environmental signals,” Lu said, according to MIT News.

This recent success in synthesizing hybrid materials that integrate living matter with everyday applications gives hope for future developments such as computers, biosensors, and biomedical devices. The research suggests that a system – perhaps a smartphone one day – could recognize its own defects and respond by repairing itself. Solar cells, which convert light into electricity, may also one day use the self-healing quality of living cells, leading to solar panels that can repair themselves on their own.

Now the researchers want to explore this mechanism further by coating the biofilms with enzymes that catalyze the breakdown of cellulose, to see whether the material could convert agricultural waste into biofuels. Who knows what else is in store for these hybrid materials!


Curing the Silicon Addiction

Every few years, we demand that the next iteration of phones, computers, and tablets be faster than the last. What we fail to think about is that each new iteration requires a technological innovation: someone in a research lab has to create a new and better way of making CPUs. At its most basic level, this means packing more transistors onto the same chip. According to Moore’s Law, the number of transistors packed onto a chip doubles every two years. Although this isn’t a law of nature, it has become a benchmark for the processor industry, and everyone expects it to hold. This has worked for the past 30 years, but we are fast approaching the limit of how small traditional transistors on silicon can get. We can already buy MOSFETs (the transistor architecture used in all modern processors) that are just 32 nm across. For perspective, that means roughly 2.6 trillion transistors could fit in the palm of your hand. What happens now? Can transistors get much smaller than they already are? What will processors look like in 20 years? In short, the future can’t be silicon-based.
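To make the doubling concrete, here is a minimal sketch of Moore’s Law as stated above; the 2010 starting point and the one-billion-transistor baseline are illustrative assumptions, not figures from this post.

```python
# Moore's Law as a simple exponential: counts double every two years.
# The baseline year and transistor count below are assumed for illustration.
BASE_YEAR, BASE_COUNT = 2010, 1.0e9  # assume ~1 billion transistors per chip in 2010

def projected_transistors(year, doubling_period=2.0):
    """Projected transistor count if doubling continues every `doubling_period` years."""
    return BASE_COUNT * 2 ** ((year - BASE_YEAR) / doubling_period)

for year in (2010, 2020, 2030):
    print(year, f"{projected_transistors(year):.2e}")
# 2020 comes out ~32x the baseline and 2030 ~1024x, which is why the trend
# cannot continue indefinitely on traditional silicon.
```

Writing the trend as an explicit exponential makes clear why even a modest continuation of it collides with atomic-scale limits.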

Moore’s Law, showing transistor counts doubling every 2 years, Wikipedia

As transistors get smaller and smaller, many problems arise if we stick with traditional architectures on silicon. First, there is a “leakage current” that flows when the “off” state isn’t completely off. This is a bigger problem with small MOSFETs, where the threshold voltage (the voltage at which the transistor switches from “on” to “off”) is very small and can barely be separated from random thermal noise. Even if the leakage current is a modest 100 nano-amperes (100 nA) per transistor, it adds up quickly: for a modern cell phone, it amounts to a current of about 10 amperes, which would drain the battery within minutes. As transistors get smaller, this leakage current becomes even more of a problem, as other effects, like quantum tunneling, come into play. The gate oxide can be so thin (down to ~2 nm) that electrons simply “tunnel” across the junction, increasing the leakage current. This current also produces heat, which processors have to dissipate. All in all, the more transistors you have and the smaller they are, the harder they are to deal with.
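For a rough sense of the arithmetic, here is a back-of-the-envelope sketch; the transistor count and battery capacity are assumptions chosen to be consistent with the ~10 A figure above, not numbers taken from this post.

```python
# Back-of-the-envelope leakage estimate. The transistor count and battery
# capacity are assumed values for illustration only.
LEAKAGE_PER_TRANSISTOR = 100e-9   # 100 nA per transistor, as stated above
NUM_TRANSISTORS = 1e8             # assume ~100 million transistors in a phone chip
BATTERY_CAPACITY_AH = 2.5         # assume a ~2,500 mAh phone battery

total_leakage_a = LEAKAGE_PER_TRANSISTOR * NUM_TRANSISTORS   # total current in amperes
minutes_to_drain = BATTERY_CAPACITY_AH / total_leakage_a * 60

print(f"total leakage current: ~{total_leakage_a:.0f} A")    # ~10 A
print(f"battery drained in:    ~{minutes_to_drain:.0f} min") # on the order of minutes
```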

The evolution of MOSFET architecture, Nature

To combat these problems, the next generation of processors uses complex geometries to minimize these effects. Intel has produced 3D transistors with intricate geometries that fix the problem, at least temporarily. But what happens when even these are not sufficient? What does the future hold?

In the future, we need to cure our addiction to silicon transistors. The more complex geometries have their own problems and can’t be counted on to scale to arbitrarily small dimensions. If we want Moore’s Law to continue for years to come, we need exotic new materials and topologies. A few emerging technologies hold promise.

One of the most promising is the Carbon Nanotube Field Effect Transistor (CNTFET). Carbon nanotubes would be placed on a silicon substrate and plated with metals to form a transistor. The nanotube is a much better conductor than copper, causing fewer heat-dissipation problems. It also doesn’t suffer the same threshold-voltage and leakage-current problems, so it can be scaled much more easily than traditional silicon transistors. IBM has demonstrated a computer using 10,000 of these transistors, and researchers at Stanford and other schools continue to work on this new topology. The basic technology is here, but it needs to be scaled up and made reliable enough to meet consumer and commercial demand.

A depiction of a simple CNTFET, Infineon

Another solution to the silicon addiction lies in harnessing one of the problems itself. The 1973 Nobel Prize in Physics was awarded to Leo Esaki for inventing the tunneling diode and discovering the electron tunneling effect, but only in the last few years have we been able to make a transistor out of it. Tunneling is a very strange effect in which, if an energy barrier is small enough, an electron can simply pass through it. It can be thought of like this: in the classical picture, to roll a ball over a hill, you have to roll it up and then let it roll back down; the energy of the system ends up the same, but you needed extra energy to get over the hill. With quantum tunneling, the ball can pass straight through the hill if the hill is small enough. A few teams have made this approach work with materials like compounds of aluminum and gallium, and mixes of indium, gallium, and arsenic. These TFETs (tunneling field effect transistors) have few of the problems of traditional silicon transistors: they have little leakage current, their threshold voltage is stable, and they don’t heat up significantly.
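To see why a barrier of only a couple of nanometers matters, a standard textbook WKB-style estimate (not something derived in this post) gives the probability of an electron passing through a rectangular barrier of width d and height V − E:

```latex
T \approx e^{-2 \kappa d},
\qquad
\kappa = \frac{\sqrt{2 m \, (V - E)}}{\hbar}
```

Because T falls off exponentially with d, tunneling is negligible through a thick oxide but becomes a significant source of leakage once the barrier shrinks to a few nanometers – a nuisance in a MOSFET, but exactly the effect a TFET puts to work.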

A depiction of how quantum tunneling works. In the classical approach (top), the electron cannot pass through the energy barrier, but in the quantum approach (bottom), the electron is able to pass through. IEEE

These are only a few of the promising technologies in the field of transistor architecture. To solve the problems with our current transistors, we will need to kick the silicon addiction and adopt a novel technology. It remains to be seen which technology will win out, and what the next generations of computers will look like.


Reconstructing Memories

On September 11, 2001, when I was seven years old, I sat in an elementary school classroom, watching footage of a plane crashing into the Twin Towers on a small television screen. My mother tells me she also remembers exactly what she was doing when the world found out about Princess Diana’s death. Nearly everyone in the country who lived through the 9/11 attacks remembers what they were doing the moment the news broke. Most people have what Dr. Karim Nader calls “flashbulb memories” of what they were doing when something significant happened, but as vivid as these memories can be, psychologists find that they are often surprisingly inaccurate.

As it turns out, my memory of the 9/11 attacks is almost entirely wrong, because the footage of the plane crashing into the towers wasn’t even aired until the following day. But I’m not alone – a 2003 study of college students found that 73% shared the misconception that the footage was shown that same day. The problem is that it is nearly impossible for humans to recall memories without altering them in some way. Momentous occasions like 9/11 are especially susceptible because we replay them over and over in our minds and in conversation, mixing them with other memories and with the memories of the people we interact with.

Recording a memory, as Eric Kandel (Nobel Prize winner in Physiology or Medicine) found, requires adjusting the connections between neurons. Over five decades of work, Kandel took a reductionist approach, using animal models. Though others were skeptical, he argued that there was no fundamental difference between the neurons and synapses of a human and those of a fly, and he studied nerve connections in a giant marine snail, Aplysia. He showed that memory storage has two phases: short-term and long-term. Forming a long-term memory differs from forming a short-term one because it requires the synthesis of new proteins and the growth of new synaptic connections. For a memory to last longer than a few hours, it must literally be built into the brain’s synapses. Kandel thought that once a memory is constructed, it is stable and can’t be undone.

Recent experiments, however, tell a very different story. In a therapy session, psychologist Alain Brunet tested patients suffering from PTSD, each of whom had experienced a traumatic event about a decade earlier. Nine patients took a propranolol pill (a drug that inhibits the neurotransmitter norepinephrine and is intended to interfere with memories), while ten took a placebo. Each patient read aloud from a script, based on a previous interview, describing their traumatic experience. A week later, the patients listened to a recording of themselves reading the scripts. Those who had taken propranolol were calmer – their heart rates rose less and they perspired less. The treatment didn’t actually erase the memory – it changed the quality of that memory. This has important implications for the clinical treatment of those with nervous disorders.

So why are memories susceptible to change? Why is something that came about as an evolutionary advantage so unreliable? Dr. Karim Nader speculates that reconsolidation could be the brain’s mechanism for recasting old memories in a new light. Adjusting our memories is what keeps us from living in the past.

But that’s not all – new research from the Emory University School of Medicine shows that it is possible for some information to be biologically inherited through chemical changes in DNA. Not only do memories have the capacity to change and evolve, but they also seemingly have the ability to alter our genetic makeup.

In the study, mice were trained to fear the smell of a compound, acetophenone, by exposing them to the smell and then giving them a mild electric shock (a previous study had already established that this kind of learning significantly changes the structure of olfactory neurons in mice). Ten days later, the mice mated, and neurological changes were observed in both their offspring and grand-offspring, even when the offspring were kept separate to rule out the effects of behavioral transmission. The offspring proved to be 200 times more sensitive than other mice to the smell of acetophenone. When the researchers studied the sperm of the parents and offspring, they found that the part of the DNA responsible for sensitivity to that particular scent was more prevalent.

It is unclear how this fear was passed down, or even how it came to be imprinted in the parent’s DNA in the first place. However, it is possible that these findings could explain irrational phobias. A fear of spiders may be an inherited defense mechanism present in a family’s genes by virtue of an ancestor’s frightening encounter with an arachnid.

It is still too early to tell what the greater implications of this breakthrough could be. What we do know is that the old notion that memory, and by extension, learning, is static, is entirely wrong. Not only are we constantly constructing and reconstructing memories through neuron connections and synapses, but our memories are also changing our internal structure by influencing the way we think, and the things we choose to remember. As our memories change, the way our bodies react changes, which in turn affects our memories again. It’s a spiraling cycle that never ends.


Are You Nothing More (Or Less) Than A Soft Machine?

‘Man as Industrial Palace,’ Fritz Kahn, 1926

There is something tantalizingly romantic to me about the objectivity of science. There is something about how the structure of the tail of a twirling galaxy and that of a hurricane whirling around its eye is fundamentally the same. One could even say these entropic laws provide the crystal resolution of an inevitable architecture at any, and every, scale.

The more I let this sink into my ever-wavering understanding of what it means to exist in an empirical landscape, the more I become haunted by what it means – to be. If you choose the survival of genetic material as the final sum, you will find that every variable manifested in an organism’s behavior adds up to reach that sum.

How would we as humans fall outside this equation? Yet even as fields like neuroscience disprove centuries-old Aristotelian views that the mind is separate from the body, I still get the sense that most of us hold that our consciousness, our character, our souls are some intangible entity separate from the physical world. But what if they weren’t?

‘Visual Field’ XKCD

What if we are simply machines programmed by physics and chemistry and biology, which churn out outputs like society, which further program us as individuals toward the sole output of survival and reproduction? To illustrate the synthesis of external information into the fabric we call consciousness, take one of my favorite subjects: color. From physics we know that what we perceive as a color is the reflected wavelength of visible light hitting an object. From chemistry we know that the pigments determining which wavelengths are absorbed by the object do so because of the specific length, strength, and angle of their molecular bonds. From biology we can see how the reflected light progresses through the incredible optical structures of our eyes. From neuroscience we can explain how these electrical signals are carried through the optic nerve to the occipital lobe of the brain, where we finally perceive the color. We can even take it a step up to psychology: how we respond to certain colors, for instance the allure of red. Red cheeks and red lips signify reproductive health, which leads to an individual with these qualities being considered attractive. Even sociologically, the color red carries significant meaning in everything from aboriginal face paint to marketing campaigns, thanks to its subliminally arousing effect. Perhaps this seemed tangential, but I’d like to think of it as a micro example of the linear progression from matter to our sense of being.

Then there are other examples in evolutionary biology. We have internal clocks synchronized to the rotation of the planet, which ensure our bodies restore the energy they need to function. We fear heights, spiders, and enclosed spaces because they threatened our hominid ancestors. We’ve evolved to enjoy the taste of sweet and fatty things to better recognize the nutrition that will fuel our bodies. We’ve evolved memory in order to recall the locations of food and shelter. We’ve developed “love” as a mechanism to reproduce and then raise offspring. There are even statistically significant trends of people resembling their romantic partners, because we have an innate attraction to those who are similar to us, due to their presumably similar genetic material. We develop the ability to empathize with others in order to live in a community, which allows for the specialization of skills and improves the overall chance of survival for the entire community. We devise political and economic systems to navigate those societies. We fight and argue to establish dominance over potential mates, territory, and resources (all survival tools). The list goes on. If you begin looking through the lens of an evolutionary biologist, you may find yourself unable to stop.

So maybe we are machines.

Yet with this logic, I have to wonder: why aren’t we perfect? Why aren’t we flawless adapters to our changing environment? Why are there behaviors that don’t seem to be at all productive for survival?

My favorite example is unrequited love.

In this mechanism, which plagues a majority of the human population at some point in their lives, an individual sustains intense grief, loss, anger, and depression over an unsuccessful pursuit of a partner (i.e., a mate). If we were perfect machines, with a single output for all inputs, this behavior would be completely useless. During this (sometimes drawn-out) stretch of self-indulgent, self-induced mourning, we could be seeking out other, more suitable mates to continue the quest of reproduction.

It was here that I began to form the hypothesis that these evolutionary imperfections are what distinguish us from machines. Sure, all the well-oiled, linearly advantageous actions could fit neatly into the “computer-istic” model. But we’d be omitting the data on unrequited loves, and on the occasional drives to surpass the limits of beneficial behavior – like how we wage wars that end up killing masses of our fellow species, or how we allow ourselves to follow a leader so far (say, in a cult) that we become brainwashed, or how, as a nation, we overconsume macronutrients to the point of medical catastrophe.

That is, our defects, our foibles, our weaknesses, our failures to conform to the deterministic direction of survival are what make us, dare I say, fundamentally human.


To know where we are with Geographic Information Systems is to understand who we are.

“A place is what it is because of its location. Where we are is who we are.”

Portuguese poet Fernando Pessoa did not take geography for granted. He understood that a place is a space with an identity. Throughout his work Pessoa created multiple personalities to write his poetry, so much so that his literary genius was only recognized after his death, when it was discovered that he alone had been several of the country’s greatest poets.

All this musing is good, but what of it?

In the same way that Pessoa was many from a singular space, so is geography. Your experience on Earth is a lottery of geographic variables: the human development index of your place of birth can determine your overall quality of life, the zip code of your residence the schools you’re eligible to attend, and your latitude the type of weather you need to prepare for. Your geography is, in many ways, your future. Which leads to the topic at hand:

You’re permeated with spatial data.

Enter Geographic Information Systems (GIS), a set of tools and methodologies for making sophisticated decisions based on geography. Geospatial data is becoming increasingly reliable, voluminous, and accessible; this is data that you’re both passively and actively creating. For example, the route you take to work may be a conscious choice, but the energy consumed by that transportation may not be. The spatial data you produce is multilayered and connected, and you actively use that information in your daily life. In other words, we intuitively use GIS to navigate our world.

What needs to be emphasized is the use of GIS technology and methodology to address the world’s greatest trials: rising seas, food waste, urban inequity, resource management, and so on. A greater awareness of the space we occupy can in itself produce innovative solutions. Better yet, GIS is an interdisciplinary tool. Virtually all knowledge is bound to a spatial and temporal context. Understanding those conditions enables us to become better decision-makers, with greater foresight than previous generations.

As early as 1854, geospatial analysis was being used to solve public health problems. You may have heard of Dr. John Snow, the father of modern epidemiology, and of his work on the cholera epidemic in London. Dr. Snow surveyed the infected district and found that cases clustered around particular water pumps. Because he mapped his fieldwork data, he was able to identify a spatial pattern and implement a solution to curb the epidemic. While cholera and other waterborne diseases are still a threat in large parts of the world, GIS can now address those problems on much larger scales.
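For a sense of what “mapping fieldwork data to find a spatial pattern” looks like in modern GIS terms, here is a minimal Python sketch of the underlying idea: assign each recorded case to its nearest pump and count. Apart from the Broad Street pump itself, the pump names, coordinates, and case locations are invented for illustration.

```python
# Minimal sketch of Snow-style spatial analysis: attribute each cholera case
# to its nearest water pump and count cases per pump. All coordinates and the
# non-"Broad St" pump names are made up for illustration.
from math import hypot
from collections import Counter

pumps = {"Broad St": (0.0, 0.0), "Pump B": (1.2, 0.8), "Pump C": (-1.0, 1.5)}
cases = [(0.10, -0.20), (0.05, 0.10), (1.10, 0.90), (0.20, 0.00), (-0.10, 0.15)]

def nearest_pump(case):
    # Straight-line distance stands in for walking distance on a real street grid.
    x, y = case
    return min(pumps, key=lambda name: hypot(x - pumps[name][0], y - pumps[name][1]))

counts = Counter(nearest_pump(c) for c in cases)
print(counts.most_common())  # the pump with the most nearby cases is the prime suspect
```

A real GIS workflow would use geographic coordinates, street-network distances, and proper cluster statistics, but the logic is the same one Snow applied by hand.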

The world is changing because we wish to shape it to our image, and we must prepare for the unintended consequences of our visions.

The beauty of geography is that it’s a discipline with no boundaries. City planners use historical traffic data to make cities more sustainable by redesigning traffic grids. Architects and engineers cooperate to choose the most suitable locations for buildings that have solar grids. Disaster managers mitigate the worst of a bad situation before it happens by identifying areas that lie beyond the reach of emergency services. The list goes on to include climatologists analyzing multispectral images to gauge the effects of CO2 on the Earth’s systems, archaeologists conducting transects to locate and map excavation areas, and educators teaching their students about issues in their communities.

Geographic Information Systems, and more broadly the Geographic Information Sciences, are essential to mapping out what the future holds for our planet. We live in paradoxical times, where the distances between places are becoming narrower through communication and trade, yet wider in terms of social equity and environmental degradation. Globalization, loaded word that it is, cannot be even remotely understood before we understand where we are relative to other places.

For a place is an identity. Understanding how it’s changing now will also help us understand who, or what, we are becoming.
