David Foster Wallace was right: Irony is ruining our culture
[Original source: www.salon.com]
David Foster Wallace long ago warned about the cultural snark that now defines popular culture. It's time to listen.
By Matt Ashby and Brendan Carroll
Percy Shelley famously wrote that “poets are the unacknowledged legislators of the world.” For Shelley, great art had the potential to make a new world through the depth of its vision and the properties of its creation. Today, Shelley would be laughed out of the room. Lazy cynicism has replaced thoughtful conviction as the mark of an educated worldview. Indeed, cynicism saturates popular culture, and it has afflicted contemporary art by way of postmodernism and irony. Perhaps no recent figure dealt with this problem more explicitly than David Foster Wallace. One of his central artistic projects remains a vital question for artists today: How does art progress from irony and cynicism to something sincere and redeeming?
Twenty years ago, Wallace wrote about the impact of television on U.S. fiction. He focused on the effects of irony as it transferred from one medium to the other. In the 1960s, writers like Thomas Pynchon had successfully used irony and pop reference to reveal the dark side of war and American culture. Irony laid waste to corruption and hypocrisy. In the aftermath of the ’60s, as Wallace saw it, television adopted a self-deprecating, ironic attitude to make viewers feel smarter than the naïve public, and to flatter them into continued watching. Fiction responded by simply absorbing pop culture to “help create a mood of irony and irreverence, to make us uneasy and so ‘comment’ on the vapidity of U.S. culture, and most important, these days, to be just plain realistic.” But what if irony leads to a sinkhole of relativism and disavowal? For Wallace, regurgitating ironic pop culture is a dead end:
Anyone with the heretical gall to ask an ironist what he actually stands for ends up looking like an hysteric or a prig. And herein lies the oppressiveness of institutionalized irony, the too-successful rebel: the ability to interdict the question without attending to its subject is, when exercised, tyranny. It [uses] the very tool that exposed its enemy to insulate itself.
So where have we gone from irony? Irony is now fashionable and a widely embraced default setting for social interaction, writing and the visual arts. Irony fosters an affected nihilistic attitude that is no more edgy than a syndicated episode of “Seinfeld.” Today, pop characters directly address the television-watching audience with a wink and nudge. (Shows like “30 Rock” deliver a kind of meta-television-irony irony; the protagonist is a writer for a show that satirizes television, and the character is played by a woman who actually used to write for a show that satirizes television. Each scene comes with an all-inclusive tongue-in-cheek.) And, of course, reality television as a concept is irony incarnate.
Repost from http://melissamaynase.tumblr.com/post/80864152800/vasundharaa-this-is-a-resource-post-for-all-the
This is a resource post for all the Good White Person™s out there. You know, the ones who say things like “It’s not my fault I’m white! Don’t generalize white people!”, or “I’m appreciating your culture! You should be proud!”, or “Why do you hate all white people, look I’m a special snowflake who’s not racist give me an award for meeting the minimum requirements for being a decent human being”.
Well, if you are actually interested in understanding racism and how it ties into cultural appropriation, please read instead of endlessly badgering PoCs on tumblr with your cliched, unoriginal arguments and repeating the same questions over and over.
Original source.
IPCC: world is ill-prepared for risks from a changing climate
By Liz Kalaugher
The world, in many cases, is ill-prepared for risks from a changing climate. So says the Intergovernmental Panel on Climate Change (IPCC), which today released its working group II (WGII) report, Climate Change 2014: Impacts, Adaptation, and Vulnerability.
"We live in an era of man-made climate change," said Vicente Barros, co-chair of working group II. "In many cases, we are not prepared for the climate-related risks that we already face. Investments in better preparation can pay dividends both for the present and for the future."
Although there are opportunities to respond, the risks will be difficult to manage with high levels of warming, according to the report. In that case, says WGII co-chair Chris Field, "even serious, sustained investments in adaptation will face limits".
What's more, the summary for policymakers says that "the precise levels of climate change sufficient to trigger tipping points remain uncertain, but the risk associated with crossing multiple tipping points in the Earth system or in interlinked human and natural systems increases with rising temperature".
The report details climate change impacts so far, such as changes in the quantity and quality of water resources, shifts in the range of animal and plant species, and altered crop yields, as well as the adaptation measures adopted to date.
"Climate-change adaptation is not an exotic agenda that has never been tried," said Chris Field, co-chair of working group II. "Governments, firms and communities around the world are building experience with adaptation. This experience forms a starting point for bolder, more ambitious adaptations that will be important as climate and society continue to change."
Facing the risk
For the first time the WGII report includes a focus on risk, which it says supports decision-making in the context of climate change. "People and societies may perceive or rank risks and potential benefits differently, given diverse values and goals," states its summary for policymakers.
Many of the key risks are particular challenges for the least developed countries and vulnerable communities, given their limited ability to cope, concludes the document.
"Understanding that climate change is a challenge in managing risk opens a wide range of opportunities for integrating adaptation with economic and social development, and with initiatives to limit future warming," said Field. "We definitely face challenges, but understanding those challenges and tackling them creatively can make climate-change adaptation an important way to help build a more vibrant world in the near-term and beyond."
The economic impacts of climate change are hard to pin down. For an additional warming of 2°C, global annual economic losses have been estimated to be from 0.2 to 2% of income, according to the report, but are more likely than not to be larger than this range. And individual countries would see big differences in the losses sustained. The incremental economic impact of emitting carbon dioxide may lie between a few dollars and several hundred dollars per tonne of carbon, depending on the amount of damage and discount rate assumed.
Regional variation
WGII has defined key risks, along with potential adaptation measures, for each region. For Africa these are stress on water resources, reduced crop productivity and climate-related changes in vector- and water-borne diseases. Europe, meanwhile, could see increased flooding in river basins and coasts, less water availability and more extreme heat events. In Asia the key risks are likely to be more riverine, coastal and urban flooding, greater risk of heat-related mortality, and more drought-related water and food shortage. Australasia could see problems for coral reefs, more frequent and intense flood damage, and increasing risks to coastal infrastructure and low-lying ecosystems.
North America is likely to suffer increased problems with wildfires, heat-related human mortality, and urban floods in riverine and coastal areas. Central and South America could see increased risks to human health, problems with water availability in some regions, and flooding and landslides due to extreme precipitation in others, and decreased food production and quality. At the poles there could be problems for ecosystems, risks for the health and wellbeing of Arctic residents, and challenges for northern communities. Small islands are likely to see threats to low-lying coastal areas as well as loss of livelihoods, infrastructure, ecosystem services and economic stability. And the oceans could well experience a shift in fish and invertebrate species, reduced biodiversity, lower fisheries abundance, less coastal protection by coral reefs, and coastal inundation and habitat loss.
More publications
This fifth assessment WGII report relies on more published literature than its fourth assessment predecessor, which was released in 2007. The number of scientific publications covering climate-change impacts, adaptation and vulnerability more than doubled from 2005 to 2010, with an especially rapid rise in papers on adaptation, according to WGII.
A total of 309 coordinating lead authors, lead authors and review editors, from 70 countries, put together this fifth assessment report, with help from 436 contributing authors and 1729 expert and government reviewers.
• The IPCC's working group III report, on climate mitigation, is due to be released on 13 April.
Despite seeing all these different ways in which the genetic picture is far more complex than we initially thought, and far more complex than the genetic picture on which the traditional evolutionary models rely, if you sift through the scientific literature, you will find plenty of findings of the heritability of such and such trait – that some trait is heritable to some specified degree.
Now, the importance of heritability to this particular discussion has to do with the fact that the traditional evolutionary models heavily depend on the assumption that traits are entirely heritable because traits are entirely a result of genes. (Once again, the traditional models teach us that anything even remotely close to Lamarckian evolution is flat out wrong.) So, one would think that all the scientific findings that purport to show the heritability of such and such trait would give support and favour to that assumption.
The problem is, the words 'heritable' and 'heritability' don't mean what you think they mean. And they don't mean what the traditional evolutionary models would need them to mean. At the end of this entry, I go into more detail regarding the theoretical consequences of what I discuss here on the traditional evolutionary models. My apologies that the perhaps most important point of this entire entry is all the way at the very end, but there is no way to fully make the point, to allow my dear reader to fully understand it, without first going through a few things.
So the notion of heritability requires some unpacking here, because the term is part of the technical vocabulary of the science of genetics, which means that when scientists use the term, they have a very specific meaning in mind, and it's not the layperson's meaning. The technical meaning of 'heritability' is intimately tied up with scientific practice and with theory of scientific practice. To really understand what scientists mean when they use the term, we have to understand how it is that they go about their experiments and research when they are attempting to measure the heritability of some trait.
I say that the meaning is tied up, not just with scientific practice, but with theory of scientific practice. What I mean by that has to do with the fact that, at least in theory, ideally, scientists are being extremely careful about what they can and can't measure, and sometimes the difference is extremely subtle for those of us who aren't scientists. We throw around the terms 'genetic' and 'heritable' interchangeably and very loosely. But what do we think we really mean by them? And more importantly, is what we think we mean even scientifically credible, in the sense of being something that scientists could even demonstrate at all?
Obviously, then, the crucial point here is to understand what heritability is actually a measurement of, since that is what is actually meant when scientists talk about the heritability of some trait, and thus, that is what scientists claim to have evidence for. And furthermore, a scientist never means what the layperson means, or what the layperson thinks he means, or what the layperson wants the scientist to mean.
What the layperson thinks heritability is a measurement of is how much genes determine the average of a given trait in a population. For example, the average number of fingers on each hand, or the average distance between the eyes, or the average height, or the average level of intelligence, or the average level of aggression, and so on.
Instead, what heritability is a measurement of is how much genes have to do with the degree of variability of a trait.
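To make that distinction concrete, here is a minimal sketch in Python, with entirely made-up numbers: heritability in the scientist's sense is the share of a trait's variance across a population that is attributable to genetic variance. It is not a claim about how strongly genes determine the trait in any one individual.

```python
import random

random.seed(0)

# Toy population: each individual's trait value is a genetic
# contribution plus an environmental contribution (invented numbers).
N = 10_000
genetic = [random.gauss(0, 1.0) for _ in range(N)]      # genetic variance ~1.0
environment = [random.gauss(0, 2.0) for _ in range(N)]  # environmental variance ~4.0
trait = [g + e for g, e in zip(genetic, environment)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Heritability (broad sense): the genetic share of the trait's variance.
h2 = variance(genetic) / variance(trait)
print(round(h2, 2))  # roughly 1.0 / (1.0 + 4.0) = 0.2
```

Notice that if everyone in this toy population had an identical environment (environmental variance near zero), the very same trait would come out as almost perfectly "heritable" – which is part of why the technical term misleads laypeople so badly.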
Finally got around to watching the documentary The Fog Of War, an extended interview with McNamara. Incredibly interesting. But I want only to note here right now that Morris ends the documentary with a phone(?) conversation in which he asks McNamara a few further questions about Vietnam, which McNamara declines to discuss any further, for reasons he can't really disclose, because of things he knows that we don't and because of how what he might say would be taken by particular but unnamed others. But it's his very last sentence that is so deeply telling. And in a most troubling way. Morris asks whether it's a problem of, "Damned if you do, and damned if you don't," and McNamara says that's exactly what it is. And, "I'd rather be damned if I don't."
This is just a repost. Original Source
The cost of stereotypes
By Margaret Harris
When a 2012 study showed that scientists subconsciously favour male students over females when assessing their employability as early-career researchers, it generated plenty of debate – not least among women, who were, according to the study, just as likely to be biased as the men were.
Some of these discussions got rather overheated, but one cogent criticism of the study did emerge. Roughly, it was this: might the scientists’ preference for men over equally well-qualified women be a rational response to the fact that, because of various barriers, women in science often need to be better than their male counterparts in order to have an equal chance of success?
The question was an awkward one, since it implied that women in science could be caught in a vicious circle, with the negative effects of bias in the workplace making it “rational” to be biased in hiring (and, in turn, making such workplace bias more likely to persist). However, a new study appears to rule out this argument by finding similar patterns of hiring bias against women even when the “job” is an arithmetical task that, on average, women and men perform equally well.
In their study, Ernesto Reuben, Paola Sapienza and Luigi Zingales examined the effects of gender stereotypes in an artificial market where male and female “employers” were presented with male–female pairs of “candidates” and asked who they would hire to complete an arithmetical task. When employers had no information about the candidates other than their appearance, they chose the man 66% of the time (out of 507 male–female pairs). When, in a second experiment, employers also heard the candidates’ self-reported performance on a previous, similar arithmetical task, they still picked the man 66% of the time – even though in around half of the 160 male–female pairs, the woman had outscored the man. When the researchers themselves informed employers about candidates’ past performance, the bias was smaller, but employers still hired the man 57% of the time (out of 265 pairs).
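As a quick back-of-the-envelope plausibility check (mine, not part of the study): if employers were unbiased, each male–female pair would be a fair coin flip, and a 66% male-hire rate over 507 pairs sits far outside what chance allows.

```python
import math

def z_score(successes, n, p=0.5):
    """Normal approximation to the binomial: how many standard
    deviations the observed count is from the chance expectation."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    return (successes - mean) / sd

n = 507
chose_man = round(0.66 * n)  # ~335 of 507 pairs
z = z_score(chose_man, n)
print(round(z, 1))  # about 7.2: vanishingly unlikely under a fair coin
```

None of this replaces the study's own statistics; it just shows that a 66/34 split over hundreds of pairs cannot plausibly be chance.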
The researchers also showed that in some circumstances, employers’ biased hiring decisions correlated with their pre-existing negative views of women and mathematics, as measured by an Implicit Association Test (IAT). Male and female employers who held stereotypically negative views about women’s mathematical abilities (IAT scores > 0) were more likely to predict that male candidates would outperform females on the second task if they were given (a) no information or (b) only self-reported information about the candidates’ past performance.
This correlation vanished when past-performance information came from the researchers themselves, suggesting that “stereotypes did not seem to affect [employers’ decisions] when the information was provided by a neutral third party”. However, even with good, neutral information, employers still chose female candidates less often than male ones: in one sub-experiment involving 265 hiring decisions, employers chose a lower-performing male over a higher-performing female a whopping 82.7% of the time.
Based on these results, the researchers concluded that “stereotypes do indeed affect the demand for women in mathematics-related tasks, regardless of quality considerations”.
Picking up where I left off:
The traditional evolutionary model of heritable traits in the context of genetic determinism disallows any form of non-genetic (non-Mendelian) inheritance of a trait. More specifically, it disallows the heritability of epigenetic changes; or in other words, it disallows epigenetic inheritance.
So, this issue actually came up in that conversation I'd had last year that sparked me to think about all of this and learn some things about it. And what that person said to me was that we have no evidence whatsoever that epigenetic changes can be passed on, i.e., that epigenetic changes are heritable. That is completely false.
In fact, we've known for some time now that epigenetic changes can be passed on.
(Note: that doesn't mean all epigenetic changes are passed on. At least some of them can be passed on. No one would ever claim that all of them can be.)
What this does, of course, is complicate the evolutionary picture a whole lot more. Because we now have lots of different things going on simultaneously that will, in a variety of different ways, affect the evolutionary picture that comes out at the end.
An intertwining theme that will come up over and over again is just how much environment can affect what goes on.
And then there is the crazy world of something that isn't supposed to be possible: an organism changing its own DNA.
Some examples of heritable epigenetic changes:
One significant example comes from the Dutch Hongerwinter ("Hunger winter").
In the winter of 1944-45, Holland suffered severe starvation for about three months because the Nazis, who had control of Holland at that time, sent all of the food in Holland to Germany, both because the Germans needed food and because they wanted to punish the Dutch. Scientists found that in those individuals who were third trimester fetuses at the time of the Dutch Hongerwinter – those who survived, obviously – there were much higher than average rates of obesity, diabetes, and cardiovascular problems. Such was not seen in those who were newborns during the time, nor in those who were first trimester fetuses during the time. This basically led to the discovery that during the latter part of the pregnancy, the fetus is, in a sense, programming its metabolism based on the information it is receiving via nutrients in the mother's blood at that time, and its metabolism stays that way for the rest of the person's life. Additionally, as expected, those who were third trimester fetuses during the Hongerwinter were smaller than average when born.
Furthermore! The children of those individuals show similar features – smaller than average at birth, higher rates of obesity, diabetes, etc. – demonstrating that these epigenetic effects have been passed on.
As I've mentioned, there are lab rat lines that have been bred for certain traits. Again, if they've been bred for these traits, then we can infer that these traits must be genetic.
There is a line of rats that have been bred for high anxiety, high stress; overall, their brains tend to be slightly smaller than average, but particularly in a certain brain region that has to do with turning off the stress response. So, baseline stress levels of those rats are much higher than average, and they are exposed to more stress overall because their brains are less effective at turning off the stress response and recovering from stress.
One scientist did an amazing experiment with these rats. She figured out how to perform surgery to transfer fetuses from one rat to another rat and have the fetuses develop normally. Then, she transferred fetuses from high stress rats to low stress rats, so that fetuses from the high stress rat line developed in low stress rats. The result was that those fetuses, after being born, grew up to be low stress, not high stress. In other words, she discovered that the trait of being high stress was not a genetic trait at all, but an epigenetic trait for which the epigenetic change occurs during prenatal development! Specifically, the epigenetic change in the fetus is caused by exposure to glucocorticoids (stress hormones) from the mother.
Of the three assumptions that Darwinian traditional evolutionary models rely on, I've only addressed two: adaptationism (the adaptationist fallacy) and gradualism. The other assumption is that traits, behaviors included, are heritable.
For the traditional evolutionary models, heritability is entirely about genes, about DNA. These models are explicitly against anything like Lamarckian evolution. Lamarck argued that evolutionary change is a product of things like the efforts of organisms, or experiences of organisms in general. To give an extremely simplistic example: the giraffe got its long neck through generations of giraffes stretching their necks to reach for tree branches to eat from, each generation or so ending up with a slightly longer neck than the previous one. Of course, everyone knows Lamarck was wrong and evolution doesn't happen that way.
The traditional evolutionary models offer the explanation that some behavioral trait was selected for because it was somehow adaptive. So for example, in tournament primate species, it's not uncommon to see adult males killing infants, and since this is a widespread behavior, the traditional models have to claim that that behavior exists in all of those species because it was selected for because it was adaptive in some way. Since the males will only kill infants who are the offspring of another male, who is not a relative, the explanation is, of course, individual selection and kin selection: since they all must be driven to pass on as many copies of their genes as possible, and to try to pass on more copies than the next guy, unless he's a close relative, because there is competition everywhere in everything, killing someone else's kids decreases the number of copies of genes that other guy passes on, giving you the opportunity to pass on your genes in place of his. Additionally, a male won't kill some other guy's kids if it so happens that that other male is a close relative. And that behavioral trait, too, the traditional model must claim has been selected for. Obviously.
But think about that very carefully: if you make that claim, then you are assuming that a behavior like killing some other guy's kids is heritable, and thus is genetic. And again, that a behavior like sparing the kids of some other guy if he is a close relative is a heritable trait, and thus, is genetic. That is a huge assumption. Especially when you look at how genes and DNA actually work: these are very complex behaviors we are talking about here, so the route from genes to such behaviors would also have to be fairly complex and composed of several steps.
But there is a problem here. The practice of killing infants by male primates of tournament species is not a simple given: it only occurs in certain types of circumstances, and those circumstances are at least as sociological as they are biological. So now, if you insist this is a genetically determined trait, because it's a behavior that has been evolutionarily selected for, and thus is heritable, you have to find some way of making genetic sense of the differences between circumstances in which a male kills an infant and those in which he doesn't. And one of the several factors all of this depends on is the hierarchical rank of the male. Good luck.
Or consider that the usage of tools has been observed in many primate species, with variability in the sophistication of the tools themselves and the method of use. For example: using rocks as hammers, to break open various food sources, such as nuts and oysters. Such a behavior is so widespread, the traditional model would have to claim it is an adaptive trait that was selected for. But since only heritable traits can be selected for, the traditional model has to say that using rocks for breaking things open is a heritable trait, and therefore must be a genetic trait. But does that really make sense? I mean, apply that to humans: the use of bowls to hold food is an extremely widespread behavior amongst humans, so it must have been selected for, which must make it a genetic trait.
Really? Suddenly that starts to sound completely crazy. Or incredibly arbitrary. And you obviously can't attempt to get out of it by trying to claim that humans are somehow different from all other animals so that we're somehow exempt from the same rules of evolutionary analysis, because if there is anything that evolutionary theories since Darwin show us, and insist upon, it is that we are not different and that we are just as much a part of the animal kingdom as is a moose or a mouse.
You might just think: but what's wrong with the idea that these traits are passed down from one generation to the next by teaching / learning? In other words, the idea that such traits are in some sense cultural.
The problem is that the traditional evolutionary models can't really make sense of that. If a behavior is strictly a product of learning and thus not inherited, it cannot be understood as being a product of evolution, which means it cannot be understood as being evolutionarily advantageous. You can see how limited the traditional models are here: if it's not genetic, then it cannot really be a product of evolution.
Why? Why does the traditional model require a trait to be genetic in order for it to be evolutionarily passed on? A few different reasons. Because evolution is all about genes: natural selection, remember, isn't about survival of the fittest, but about passing on more copies of one's genes. It's what's in the genes that's important. Furthermore, because the traditional models have evolution moving so slowly – gradualism – what is needed for a trait to eventually spread throughout a population is some way for it "to stick" so that it is not lost at any point. Because, as we all know too well, it is incredibly easy for things learned to be lost in the following generations; that fact makes it nearly impossible to imagine strictly learned traits surviving long enough to gain the title of being evolutionarily advantageous. Having the traits written into the genetic material itself is basically the only way the traditional models have of explaining how traits are inherited and spread through populations, or how they disappear if they are disadvantageous, which is equally important to the traditional evolutionary models. For again, how can something really be selected against in the way that the traditional models mean if the trait is not tied to the genes?
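That "sticking" argument can be caricatured with a toy simulation (all parameters invented): a genetically encoded trait is transmitted essentially without loss, while a learned trait fails to be acquired by each new generation with some small probability, so over many generations only the genetic version reliably persists in the population.

```python
import random

random.seed(1)

def frequency_after(generations, pop_size, loss_rate):
    """Fraction of the population still carrying a trait after the given
    number of generations, when each carrier's single offspring fails to
    acquire the trait with probability loss_rate."""
    carriers = pop_size  # everyone starts with the trait
    for _ in range(generations):
        carriers = sum(1 for _ in range(carriers)
                       if random.random() > loss_rate)
    return carriers / pop_size

# "Genetic" transmission: essentially lossless copying.
genetic = frequency_after(generations=100, pop_size=1000, loss_rate=0.0001)
# "Learned" transmission: even a modest 5% failure rate per generation.
learned = frequency_after(generations=100, pop_size=1000, loss_rate=0.05)
print(genetic, learned)  # genetic stays near 1.0; learned collapses toward 0
```

This is only the traditional models' intuition made explicit, of course; it deliberately leaves out everything (teaching institutions, imitation, epigenetics) that might make a non-genetic trait stick after all.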
Another reason the traditional models require a trait to be genetic in order to be evolutionarily passed on is that the alternative creeps over into Lamarckian evolution.
Here is a wonderfully interesting example to think about.
You know, I'm getting really fucking sick and tired of losing so much of my life to being sick and tired because in one way or another my body is the enemy I have to fight with every single day. If it's not one thing, it's another, and my body simply refuses to respond to any medications for whatever is ailing me on any given day.
What's the point in living if 99% of the time you feel like shit and can't get a damn thing done?
I should mention that this is not a joke: it is an actual recruitment video for the Gabon military.
Very worth listening to this talk. This guy has some very interesting things to say, and has a very interesting perspective on health and being human.
The best evidence / demonstration of the presence of male bias in the sciences, owing to their having been, until only very recently, male dominated:
The suspicious lack of studies of homosexual women amongst all the studies of homosexuality in humans.
Apparently the scientific interest in homosexuality in humans = an interest in homosexual men. (With a rare few exceptions to that '='.)
(Apologies for taking a while to get these posted. I've lately been feeling not so good, interspersed with obsessively working on knitting a sweater that I've been trying to finish, but keep taking apart to change this or that.)
One thing I want to point out and emphasize – something I've already mentioned, but want to say again – is that behaviors are just as much traits as physiological characteristics are, and the traditional evolutionary models depend on the idea that behaviors are shaped in exactly the same way as any physiological characteristic is, and thus, that behaviors are connected to genes in exactly the same way as any physiological characteristic is. So, for example, the explanation as to why male baboons fight with each other all the time is the same explanation as to why their olfactory sensory system is set up the way it is, and why they have canines the size that they do, and why their overall bone structure is what it is: on the average, it increases how many copies of genes they get to pass on. (Of course, then there's that problem I discussed previously about females actually choosing to mate with the less aggressive males, and thus explicitly tricking the more aggressive, higher-ranked males in order to mate with those less aggressive, lower-ranked males that are actually nice to the females.)
With that in mind, here is a fascinating finding to think about:
Amongst voles, some types are monogamous and other types are polygamous. What has been found is that a promoter upstream of a gene having to do with the hormone vasopressin – a gene for a receptor – comes in two different versions, and the monogamous male voles have one version, while the polygamous male voles have the other version. The gene itself, however, is the same in both. A difference in the promoter means differences in the context in which the gene is expressed, such as what transcription factor affects the gene, where in the body (in the brain) the gene is expressed, and so on. So a mere difference in the promoter of a gene, but not a difference in the gene itself, gives rise to a radical difference in behaviors. For a species to be monogamous means that that species is a pair-bonding species; and for one to be polygamous is for that species to be a tournament species. And as I've already discussed, there are very significant differences between pair-bonding and tournament species. So, a single difference in a regulatory factor for one gene can play a critical role in a very big set of differences at the macro level.
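A crude way to picture that vole finding (a toy model; the promoter names and the mapping below are invented for illustration, not real loci): hold the receptor's coding sequence fixed, and swap only the promoter, which changes the expression context and, downstream, the behavior.

```python
# Purely illustrative: identifiers and mappings are made up.
RECEPTOR_GENE = "vasopressin-receptor-coding-sequence"  # identical in both vole types

# Hypothetical mapping from promoter variant to expression context.
EXPRESSION_CONTEXT = {
    "promoter-A": {"region": "pair-bonding circuitry", "behavior": "monogamous"},
    "promoter-B": {"region": "elsewhere", "behavior": "polygamous"},
}

def phenotype(promoter_variant):
    # The coding sequence never changes; only its regulatory context does.
    context = EXPRESSION_CONTEXT[promoter_variant]
    return {"gene": RECEPTOR_GENE, **context}

monogamous_vole = phenotype("promoter-A")
polygamous_vole = phenotype("promoter-B")
assert monogamous_vole["gene"] == polygamous_vole["gene"]          # same gene...
assert monogamous_vole["behavior"] != polygamous_vole["behavior"]  # ...different behavior
```

The point of the toy is only structural: a single regulatory difference, with zero difference in the gene itself, is enough to flip a macro-level trait.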
However! Things quickly get complicated and murky once we look at what is going on in one species in particular. Normally in pair-bonding species, in order for the pair-bond to occur, during and immediately after mating the right things have to be going on with certain hormones and neurotransmitters, levels of some going up at the right times while the levels of others decrease, and so on. That difference in the promoter for the vasopressin receptor gene is playing a role here. As you would expect, the specifics of what goes on are different in tournament species. This is a consistent difference. And then we look at bonobos. In bonobos we find exactly the same things going on with hormones and neurotransmitters that go on in pair-bonding (i.e., monogamous) species. But bonobos are as far from monogamous as any species on the planet can possibly get. And yet they are also definitely not a tournament species, given that males and females do not differ in body size, that bonobos are matriarchal, and so on.
What should one take from this? First, the connection between behavioral traits and genes is more complex than a simple gene-to-trait mapping, since it's not always about the gene itself, but can have something to do with the context in which the gene is expressed. But second, things still aren't even that clean, since we find anomalous cases.
Recall the different lab rat lines that have been bred for certain features. At least one of the most aggressive lines was found to actually differ in pain sensitivity, so that they were much more sensitive to pain than other rats. Being a lot more aggressive is, if you think about it, a pretty significant difference in behavior, since it's not just about a single behavior or a type of behavior, but about all behavior. If the difference has to do with how sensitive one is to pain, then that is a difference that can be brought about by mutation in a single gene, or in some regulatory aspect for that gene: mutate the gene for the µ1-opioid receptor, which is the favoured receptor of β-endorphin, making it non-functional, and that individual has lost the body's best natural pain-killing mechanism, and will thus end up far more sensitive to pain.
So again, much like the examples I gave in the previous entry, it is extremely easy to get significant macro-level changes by mere micro-mutations in DNA.
Now, continuing on as to how it is that the gradualism of the traditional evolutionary models falls apart:
There is a second way in which duplications of genes make gradualism implausible: they can actually allow for much faster evolution. An extra copy of a gene for a receptor of a hormone, for example, might get mutated enough to code for a receptor with a different shape, but a shape into which nothing fits. Since there's another copy still coding for the right receptor, the mutation has no effect. Then, ten thousand years later, some new mutation codes for a protein that just happens to have the right shape to fit into and bind with that receptor. And suddenly a dramatic change results. In other words, in a metaphorical sense, extra copies of genes and regulatory factors allow the genome to experiment with different mutations without having to change whatever roles those genes and regulatory factors have, without having to lose whatever it is they do. Then eventually, some of those "experimental" mutations complement each other, and suddenly there are significant macro changes. What this would mean is that the "in between stages" we envision occurring in traits, i.e., at the macro level, in a gradualist picture of evolution wouldn't actually be occurring; instead, those "in between stages" would be occurring at the genetic level only, because the mutated genes and regulatory factors are ones that the body doesn't have any way of expressing yet, and since the mutations are occurring in duplicated parts of DNA, nothing is lost, and thus, nothing at the macro level changes. And then the right mutation occurs that can "turn on" all those mutations.
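That metaphor of the genome "experimenting" behind a spare copy can be put into code. This is a deliberately crude sketch (Python; the sequences, the mutation routine, and the all-or-nothing "phenotype" rule are invented stand-ins for illustration, not real genetics):

```python
import random

def mutate(seq, rng, alphabet="ACGT"):
    """Apply one point mutation: replace a random position with a random base."""
    i = rng.randrange(len(seq))
    return seq[:i] + rng.choice(alphabet) + seq[i + 1:]

def phenotype(copies, functional):
    """Toy rule: the trait is expressed so long as at least one copy
    still matches the functional sequence."""
    return "receptor works" if functional in copies else "receptor broken"

rng = random.Random(0)
gene = "ACGTACGTACGT"
copies = [gene, gene]  # a duplication: two identical copies of the gene

# Let the spare copy accumulate point mutations for many "generations"
# while the original copy keeps doing its job.
for _ in range(200):
    copies[1] = mutate(copies[1], rng)

print(copies[0] == gene)        # True: the working copy is untouched
print(phenotype(copies, gene))  # "receptor works": no macro-level change
```

Selection never "sees" anything here: however far the spare copy drifts, the phenotype is unchanged, so none of those intermediate genetic stages can be advantageous or disadvantageous. One further lucky mutation elsewhere could then, in principle, give the drifted copy a new role all at once.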
This is not simply a theoretical possibility for duplications. Scientists studying the evolution of specific genes, or specific proteins in the body, have determined that some particular gene initially started out as a duplicate copy of another gene that is still present in the genome and still functional, and now that mutated duplicate codes for a functionally different protein.
On the one hand, we could say this is still gradualism, since the gradual changes are occurring at the genetic level. But that sort of gradualism is an entirely different kind of gradualism from that of the traditional evolutionary models, because, most importantly, it could not save the competition that permeates the traditional models, the competition that is supposed to be driving a lot of the evolution going on. Since such gradual genetic mutations would be having no macro-effects, they could not increase or decrease an individual organism's chances for passing on copies of its genes; in that sense, such mutations would be entirely neutral with respect to evolutionary "advantage" or "disadvantage". But for the traditional models, it is crucial that the "in between stages" are specifically incrementally evolutionarily "advantageous".
Consequently, this function of duplicated genes also explains how traits can arise without being adaptive.
Picking up where I left off: how do the traditional evolutionary models fall apart in the face of our current understanding of the molecular-genetic world?
The bottom line is this:
The traditional evolutionary models all rely on a few crucial assumptions, and one of those assumptions is that of gradualism, that is, that the evolutionary process occurs very gradually, very slowly, through tiny bits of random changes in this or that trait amongst members of a species. However, gradualism is inconsistent with what is actually going on in the molecular-genetic world. Part of understanding this is, once again, understanding why it's not right to think of genes as coding for traits, because what genes really code for are proteins, which have all sorts of functions in the body. The rest of understanding how gradualism is inconsistent with molecular genetics is understanding the details of what is happening at the molecular level, which I went into in the previous entry, and understanding how those details fit together as a whole picture of what is going on in the molecular-genetic world, and what the molecular and biological consequences are.
Of course, in order to see how gradualism isn't what is actually happening, it's important first to consider what would have to be going on for gradualism to be true. If gradualism is true, then what we should expect to be happening over the course of evolution are micro-mutations in the DNA bringing about very slight macro-changes in some trait. Then, if that slight change gives the organism even the tiniest bit of advantage over other members of its species in passing on copies of its genes, then, given a long enough stretch of time, that slight macro-change will have been selected for and eventually all the members of the species will have inherited that micro-mutation in the DNA. By that point, it would no longer be considered a mutation, but just a "normal" part of the genome of that species.
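That gradualist story can be sketched as a toy simulation (a hypothetical illustration in Python; the population model, function name, and all the numbers are my own inventions, not anything from a real study):

```python
import random

def spread_of_mutation(pop_size=1000, s=0.01, initial=1, seed=1):
    """Toy sketch of the gradualist picture: a micro-mutation confers a tiny
    reproductive advantage s, and each generation every individual's allele
    is drawn from the previous generation, weighted by fitness.
    Returns (fixed, generations): whether the mutation took over the whole
    population, and how many generations that took."""
    rng = random.Random(seed)
    mutants = initial
    generation = 0
    while 0 < mutants < pop_size:
        # probability a newborn inherits the mutant allele
        p = mutants * (1 + s) / (mutants * (1 + s) + (pop_size - mutants))
        mutants = sum(rng.random() < p for _ in range(pop_size))
        generation += 1
    return mutants == pop_size, generation
```

Two things fall out of running this kind of sketch: even a genuinely advantageous mutation starting in a single individual is usually lost to chance, and when it does take over, it takes a great many generations, which is exactly the slow, incremental picture being questioned here.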
Explaining how that's not what is actually going on requires going through some explanatory steps and going into a little more detail about some of the specifics. So bear with me, as there are a lot of pieces to the picture. (As I've already said, things are a whole lot more complicated than you think.)
A micro-mutation is understood here to be a point mutation, of which there are three basic types: replacement, insertion, and deletion. A replacement mutation is when a single nucleotide is changed to a different nucleotide, having been miscopied or altered by radiation, etc. An insertion mutation is when a single nucleotide is accidentally repeated when the DNA is copied, and thus causes a frameshift in the direction opposite to that in which the DNA is read. A deletion mutation is when a single nucleotide is accidentally left out when the DNA is copied, and thus causes a frameshift in the same direction in which the DNA is read.
A replacement may have no effect if the resulting codon it is part of codes for the same amino acid as the un-mutated version. Or, it may have only a slight effect if the resulting codon codes for an amino acid that has very similar molecular properties to the amino acid coded for in the un-mutated version. Here is a linguistic example to help explain and demonstrate this – I didn't personally come up with this example, I'm stealing it from someone else.
(1) 'I will now do this.'
(2) 'I will mow do this.'
We wouldn't have any difficulty understanding what that second sentence is supposed to mean. So in this case, the replacement doesn't really have an effect, because the original message is still readable despite the mutation.
However, sometimes a replacement can have drastic effects, if the resulting codon codes for a different amino acid with entirely different properties from the amino acid coded for in the un-mutated version.
(1) 'I will now do this.'
(3) 'I will not do this.'
Obviously, the third sentence means something entirely different from the first; so, in this replacement mutation, the message has drastically changed, with potentially devastating effects.
(1) 'I will now do this.'
(4) 'I will noo wd othi.'
Since codons have three, and only three, nucleotides, when one nucleotide gets accidentally repeated, it causes what is called a frameshift, which is what is represented as going on in (4). This insertion mutation has now turned the sentence into gibberish.
(1) 'I will now do this.'
(5) 'I will nod ot his.'
Once again, because codons must have three nucleotides, a deletion mutation also causes a frameshift, in the other direction, which again has here turned the sentence into gibberish.
So, point mutations can have no effect, thanks to the redundancy in the genetic coding of amino acids, or a very slight effect that doesn't really cause any macro-changes – that is, macro-changes that affect the organism's survival – or a significant and possibly devastating effect.
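The sentence analogy can be mechanized in a few lines. This is a hypothetical sketch (Python, with strings standing in for DNA): the "genome" is a string and "codons" are fixed three-letter chunks, mirroring how the machinery reads mRNA three nucleotides at a time. Notice how a replacement touches only one codon, while an insertion or deletion scrambles everything downstream.

```python
def read_codons(seq, width=3):
    """Split a sequence into fixed-width 'codons', as a ribosome would."""
    return [seq[i:i + width] for i in range(0, len(seq), width)]

original    = "nowdothis"   # 'now do this', spaces removed
replacement = "mowdothis"   # one letter changed: frame preserved
insertion   = "noowdothis"  # one letter repeated: frameshift
deletion    = "nwdothis"    # one letter dropped: frameshift the other way

print(read_codons(original))     # ['now', 'dot', 'his']
print(read_codons(replacement))  # ['mow', 'dot', 'his'] - only one codon differs
print(read_codons(insertion))    # ['noo', 'wdo', 'thi', 's'] - gibberish downstream
print(read_codons(deletion))     # ['nwd', 'oth', 'is'] - gibberish downstream
```

The fixed reading width is the whole story: because nothing re-synchronizes the frame after an extra or missing letter, a single-nucleotide insertion or deletion corrupts every codon that follows it.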
The question one might be wondering at this point is, if the traditional models of evolution fail to describe and explain what has actually been going on, then, what's the alternative?
The way of getting at some kind of answer to that is by looking in the one place that we think carries the effects of evolution: genes.
Our theory of the nature of DNA and genes and how they function has changed drastically from what it was 40 years ago. DNA and genes have turned out to be very different from what we initially expected. At this moment, there is still a whole lot that we don't know and don't understand, so the field of genetics is not without some significant debates and disagreements between scientists. That has to underlie any discussion about these topics.
That said, there are a lot of things scientists are in agreement about. There are also a lot of findings and a lot of data; and while those are, in and of themselves, fairly clear, how we ought to interpret them is sometimes clear and sometimes not.
Conversationally, as non-scientists, we often talk about genes for this or that trait, such as having genes for brown or blue eyes, for being tall or short, having brown or blond hair, for being muscular or thin, whatever, just to name a few traits relevant to humans. The problem is, that's incorrect. That sort of understanding of genes comes from the beginnings of Darwinian evolutionary theory and Mendelian genetics.
For Darwinian evolution, for example:
You might be looking at some species of bird and notice that, between two individuals (of the same sex), one has a slightly larger wingspan than the other, even though the rest of their bodies are fairly equal. You might then conclude that, perhaps, the one with the slightly larger wingspan has some slight mutation in the gene that has to do with wingspan, and that's probably advantageous; so, if you came back many, many, many generations later, most or all members of the species would have that slightly larger wingspan. On the other hand, if it was disadvantageous – maybe it's too clumsy – then, many generations later, none of the members of that species would have that large a wingspan. And that's how evolution via genes works.
This is the way we were all taught to think of genes in middle or high school, which means that, for most people, it's hard not to think of genes in that way.
Genes don't code for traits. Period.
What genes do code for are proteins. In other words, genes code for microscopic molecules. So, it's in terms of microscopic molecules that you have to think about all of this. Now, I don't know about you, but I'll be honest: it's easy to say that, but to actually think in those terms is not at all easy. It's like the difference between having to translate a foreign language in order to read something or have a conversation, and being able to think in that foreign language so that no translation whatsoever is occurring in your mind.
Why am I blathering on about this? Because as long as you slip even the slightest bit back into that incorrect picture of genes that we were all taught as kids, you put yourself in danger of misunderstanding, and it's a kind of misunderstanding that can easily spread, so that one little misunderstanding at some point has the potential for a sort of ripple effect. It's not that all of this is actually difficult to understand, because it's not. It's more a matter of breaking a habit.
An interesting criticism of the traditional evolutionary models came from Soviet scientists. Their criticism was directed at the claim that the driving force behind evolution was competition between individuals, individual selection, just trying to pass on more copies of your own genes than the next guy (who's not closely related to you). What the Soviets emphasized as playing a critical role in evolution is climate, and being able to survive extreme climate conditions. But what they observed was that, in such conditions, you didn't find the competition that these Western white upper-class male scientists were harping on about. According to the traditional models, what was supposed to be the case in extreme conditions was that, being under greater pressures, competition between individuals, individual selection, should increase. Scarcity of resources, for example, we've all been taught is supposed to cause increased competition. The Soviet scientists observed that the opposite is actually true.
And in fact, further observations confirm this. In times of decreased food supply, not decreased so much that animals are being calorically deprived, but just during times when they have to work a lot harder to get food, the traditional models predict that competitive and aggressive behavior should increase. What in fact has been observed is that during those times, aggressive behavior decreases! And this is something that has been observed in lions! Furthermore, in times of excess, when food sources are abundant and animals don't have to work much to get their food, aggressive behavior increases! Again, that's the complete opposite of what the traditional models would have you believe. This increase in aggressive behavior during times of food excess is called 'behavioral fat'.
This reminds me of something I observed in my ducks on Waldron, that I'm pretty sure I wrote about here. When all of the females were spending their time sitting on their nests, this left the males on their own 95% of the time. Since the males don't eat nearly as much as the females, because they're not producing eggs, they only spend a small percentage of their time eating. What is there left to do? Well, what are they normally doing all day long? Protecting the females, or doing this or that for the females. So, without the females around, they've got nothing to do. Which means, basically, they're bored. Now, when I was observing this, there were still two groups of the ducks, the older bunch and the younger bunch, and at the time there were two older drakes and three younger ones. So what did they spend most of their time doing? Messing with each other! Starting shit with each other! One or two of the younger guys would go over to where the older ones were hanging out and just start some unnecessary "fighting" – it was hardly fighting, not at all like what they would do when they were trying to impress the females. From what I could tell, more often than not, it was the younger drakes who'd go over and mess with the older guys, and occasionally it was the other way around.
Another major thing that the traditional models got very wrong is sex.
One of the ways of giving evidence in favour of kin selection is by demonstrating the fact that most species have ways of detecting relatedness, many of them involving chemical signatures detected in pheromones, but plenty of them involving many other forms of recognition.
Just to give a few examples:
Some research being done with a particular species of monkeys included making recordings of several of the members of the group making different types of calls and voice gestures and whatnot. A speaker was hidden in a bush and the scientists played a recording of one of the very young children making an alarm call. The immediate reaction of all of the adults (minus one) was that they looked at the mother of the infant whose voice was on the recording, demonstrating just how well they can detect relatedness.
A type of scenario that has been observed in primates: one female (A) does something mean to another female (B), and then later in the day, the child of female B does something mean to the child of female A. This requires the children to understand the relatedness between the generations.
I think this next example probably applies to a few different rodent species. Male hamsters are migratory, meaning that, once they mate, they don't stick around. That also means that, usually, when a male comes upon a female with a litter of kids that aren't his own offspring, he will kill and eat them. Unless they are the offspring of a close relative of his, such as a brother or a close cousin, the relatedness being detected via pheromones.
So one way of arguing for kin selection is to say that these mechanisms that allow individuals to detect others who are closely related are for the purpose of helping to pass on copies of one's own genes, since close relatives have some degree of genes in common. This helps pass on copies of one's own genes because when an individual detects that someone is a close relative, that detection causes the individual to cooperate with that other instead of attacking him or her, such as in the case of the male hamster detecting that those kids were fathered by his brother (or other close relative). And this idea of cooperation between kin is also used to explain away any behavior that appears to be "altruistic".
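The kin-selection logic here is usually formalized as Hamilton's rule (not named in this entry): a gene for helping can spread when r × B > C, where r is the coefficient of relatedness, B the reproductive benefit to the relative, and C the cost to the helper. A minimal sketch in Python; the relatedness coefficients are the standard ones for diploid species, but the benefit and cost numbers are invented purely for illustration:

```python
# Standard coefficients of relatedness for diploid species.
RELATEDNESS = {
    "parent/offspring": 0.5,
    "full sibling": 0.5,
    "half sibling": 0.25,
    "first cousin": 0.125,
    "non-relative": 0.0,
}

def helping_favored(relative, benefit, cost):
    """Hamilton's rule: helping is selected for when r * B exceeds C."""
    return RELATEDNESS[relative] * benefit > cost

# The hamster example: sparing a brother's pups carries a modest cost to
# the male but a large benefit to a genetic line that shares half his genes.
print(helping_favored("full sibling", benefit=4, cost=1))  # True:  0.5 * 4 > 1
print(helping_favored("non-relative", benefit=4, cost=1))  # False: 0.0 * 4 > 1 fails
```

On this formalization, the hamster's pheromone detector is doing exactly one job: supplying the r that decides whether "cooperate" or "attack" is the gene-propagating move.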
As a theory, is kin selection true or false?
What I want to say is that that's a nonsense question.
To start with, I'll relay some examples of ways in which we found out we were wrong about something, or of discovering / realizing something that calls into question conclusions of previous research and findings.
[Just a disclaimer: These are all examples I've gotten secondhand, though I don't see any good reason to question the reliability of the source; however, that certainly doesn't mean this source isn't mistaken about any of these. So, being that I'm not even close to being an expert in this branch of sciences, I can't guarantee that everything here is 100% correct and I can't give you any reason to think that it isn't. I'm just relaying information I got secondhand.]
It had previously been observed that the social rank of a chicken was inherited. I'm not sure if this just had to do with females, and that the social rank was inherited from the mother, or if this was true of both males and females, but it doesn't matter. So, this seemed like a pretty clear instance of the heritability of a social behavioral trait. However, what was later found – I don't know how much later, and I don't know when – is that what is actually being inherited has something to do with the melanism of the feathers, such that it causes the other chickens to peck at the individual a lot, which thereby reduces the individual to submission. So, it wasn't that some social behavioral trait was being inherited, but a visibly perceptible anatomical trait that significantly influenced how other members of the species treated that individual.
It had previously been thought that chicks inherited an instinctive behavior of picking at grubs and such. However, someone eventually figured out that what was actually being inherited was the tendency for the chick to pick at its own toes, thus discovering quite by accident that there were yummy things it could find down there, too. Not sure how it was done, but I guess the scientists somehow covered chicks' feet, and so they didn't display this supposedly inherited instinct to pick at bugs in the ground.
There was an experiment in which scientists stimulated a certain part of a rat's brain and the rat immediately attacked and tore into and killed a mouse (or maybe it was another rat?) that was nearby. The scientists of course would have done this more than just once, and it was the same each time, so they concluded that this part of the brain definitely had something to do with aggression and aggressive behavior. I think that part of the brain was in the hypothalamus, and thus, lots of research and studies and experiments went into studying the hypothalamus under the assumption that it controlled aggression. Eventually someone discovered that it had nothing whatsoever to do with aggression, but that it had to do with hunger. In the original rat experiment, they misinterpreted the behavior as aggressive behavior when it was actually predatory behavior driven by hunger. It would be akin to, if you had that part of your brain stimulated, running into the kitchen and literally ripping open a box of cereal or cookies or something because you'd be so overtaken by ravenous hunger.
There was a similar misinterpretation of rat behavior after stimulating a certain part of the brain, and I think that misinterpretation was also that it was aggressive behavior. The behavior was that the rat immediately started shredding everything in the enclosure. As it turned out, that part of the brain actually had something to do with mothering behavior, and it stimulated the rat to start building a nest. I believe they didn't figure this out until they stimulated that same part of the brain in a monkey, perhaps, and she did something I can't now remember that was very obviously a mothering behavior.
Somewhat relatedly: it was believed for a very long time that female rat mating behavior was entirely passive, that they basically did nothing, so that only the males played an active role when it came to mating. I believe the scientist who discovered that this is wrong is (was?) a woman. What was eventually found out is that the passivity of the females was due entirely to their being in small enclosures in labs: if they were put into much larger enclosures and thus given lots of space – or if they were observed in a natural, i.e. non-lab, habitat – they did indeed play an active role, running around and engaging in some kind of courtship ritual, or something like that. (Something that apparently required a lot of space… not sure about the details.)
So, these examples demonstrate how easy it is to misinterpret and misunderstand observed behavior, thus drawing incorrect and misguided conclusions. And this is what ethology is all about trying to prevent, ethology being a discipline that tries to study and understand other species "in their own language", so to speak. It was, for example, an ethologist who first observed bee dancing for what it is, and then figured out the content of the information being communicated, how to translate it. Ethologists pushed the idea that you can't possibly understand another species unless you are observing it in its natural habitat instead of in a laboratory. Now, to us that sounds obvious, but you have to realize that, for a long time, to the majority of scientists studying other species, that sounded ridiculous, unnecessary. Look at the history of zoos: look at how long it took people to figure out that keeping animals in entirely cement enclosures was a bad idea. For a long time people believed that the habitat made no difference to the animals and certainly no difference to their behavior.
The significance of these examples is also to show how much what is observed is driven by what scientists already believe. So, here are some more examples.
"Evolution is not an inventor. Evolution is a tinkerer."
Picking up where I left off…
The assumption of adaptationism. It is in the context of this assumption that the majority of evolutionary theories attempt to explain some trait or other in some species or other. Natural selection, kin selection, sociobiology, evolutionary psychology, sexual selection, individual selection, and so on. The basic idea behind this assumption is that, if something has come out of evolution – and everything we can possibly observe that currently exists in some organism or other is something that has come out of evolution – it must have been because it was adaptive in some way, because it has some function that increases the fitness of the organism or of the species.
This is actually quite easily taken down in two ways. First, we can point out how ridiculous it is by then asking what the adaptive advantage is of any random feature, such as, for example, the human chin. For what did the human chin adapt? Compare the face of any other primate to that of a human and one of the distinctively human features that sticks out is our chin. Why was a prominent chin adaptive?
Sometime last year I got into a conversation-turned-argument about evolution – not about whether evolution occurs, has occurred, but about details of it, the mechanisms of it, some of the outcomes, etc. At some point in the conversation, the other person asked me: Do you actually know the science about this stuff? I had no problem admitting that my background knowledge of the biological sciences having anything to do with evolution was fairly minimal. But what I was arguing for was less a claim about the way things really are and more a philosophically skeptical stance on claims of: this is how things really are, and it's certain, because science proves it.
So I made a point of trying to learn and find out more about the biological specifics of some of that stuff. And I have to say: It's good to know that I was right.
Not because I just care about being right, but because there are all sorts of ethical and political aspects and consequences to all of those scientific claims, and all of that matters to me just as much as the science itself does. Not that I would ever argue that ethical or political views should determine what the scientific theories claim. With this particular branch of sciences, having to do with evolution, especially of humans, understanding human behavior, and so on, it gets rather complicated, because the ethical and political aspects of human life enter into it in two ways: (i) the ethical and political claims and views that would follow as consequences of the scientific theory (or theories), and (ii) explaining and having to account for the ethical and political aspects of actual human life, the ethical and political arenas that actually exist amongst humans all over the world.
But, going back to that conversation:
I wasn't trying to say that any of the scientific claims were just flat-out false, especially because I knew I didn't know enough of the sciences to make such judgments. My reasoning was based more on my background in philosophy of science and knowing about all of the philosophically problematic aspects and challenges of the sciences, knowing some of the history of the sciences, more recent trends in the progression of the sciences, the increasingly problematic arena of pop science and pop science media, and so on. But my reasoning was also based on something I'm not sure how to name, but can only attempt to explain as follows. I've spent so much of my life observing and paying attention to so many things, such a variety of things, analyzing them, trying to understand them, trying to understand how they, as pieces, fit together. And based on all of that, just thinking through all of that, it brings me to a cognitive state, not of a specific thought, but of something that maybe is more easily described as a feeling: that what was being claimed – the theory (or theories) involved, the evidence relied on and the interpretations of that evidence offered in support – all of that, being offered as the answer, or set of answers, to a particular question, or set of questions, just isn't right. Not that it's all just plain wrong, but rather that it isn't right as a particular answer to a particular question. I couldn't really tell you, in full, how I come to that, how I think I know that; it's not really just a feeling, but calling it "a feeling" is probably the best I can do insofar as attempting to tell you what the experience of it is like for me.
I know this sounds pretty cheesy, but, regarding the theory being argued for, and similar attempts to theorize the evolution of human behavior: something about it, and them, just doesn't seem right to me, doesn't "ring true" to me.
Well, I was pretty much right. And even right in the right way.
I probably worked 20 or 30 minutes on this, to get it just right – I didn't even set out to do it, I just got sucked in – but the moment I placed the last rock, I realized what I had just done. Which I find pretty funny. (It's by colour, too...)
I think that Pink Floyd might just be my all time favourite band. I seem never to tire of them.
I don't know if it's coincidence or if this is a topic currently circulating, but I've noticed more than a few mentions in the past couple of years of something about dogs being evolutionarily descended from wolves. What people then take from that is that, dogs are inherently similar to wolves, so what's true of wolves is also true of dogs.
This frustrates me; it's such an utterly fallacious move, and the things people end up claiming because of it are so obviously wrong if you just actually spend time observing dogs. Case in point: diet and eating habits. Wolves are predators, hunters, and they hunt in packs, not alone, which makes hunting a group activity. There are two important things that follow from this: wolves are flesh-eaters and they share the kill (the meal) – though, being a strictly hierarchical community, the sharing satisfies that order.
Now, I'm happy to admit that a lot of what I say here is somewhat speculative, not that scientific, and is based mostly on thinking about my own experiences with dogs and what they're like, experiences of others that they've shared with me, and just using my intellectual abilities for critical thinking and reasoning (and I know that's pretty vague, but I do want to take advantage of that and use it as an umbrella term for lots of types of non-deductive reasoning). I suppose I could also say that I'm using some way of thinking that I'm not sure how to categorize or really describe, but the result of it is that I seem to do a fairly good job of figuring out how to understand other animals, so long as I'm able to spend enough time observing and interacting with them. I was able to do it with the ducks, which is something I never would have previously imagined I could or would do. And that was only in a couple of years. I've spent almost my entire life around dogs, I love dogs, and when I spend a significant amount of time with any particular dogs, such as living with them whether I own them or not, I can't help but care for them a lot and give them a lot of attention. But of course, if you're going to care for a dog, you have to pay them a lot of attention in order to understand what they want and need and don't want and don't need. So, a lot of what I say here might be somewhat speculative and not very scientific, but at the same time, I'd argue that it's based on a lot of experience with and observation of dogs.
One thing it's important to understand is that dogs didn't just happen to descend from wolves all on their own: dogs, so the theory goes, are the evolutionary result of wolves being domesticated by people. This means that dogs never existed in the wild; dogs are inherently a domestic species. One of the most crucial practices in order to successfully domesticate animals is providing them with food. This means that dogs were never hunters, and so, they were never flesh-eaters. It is highly implausible that people a long time ago would have fed their dogs, or even their wolves before the biological transition had occurred, an all-meat diet, or an all-flesh diet. (The difference between 'meat' and 'flesh' in this context, and thus, the difference between 'meat-eater' and 'flesh-eater', is that 'flesh' refers to the whole carcass, because predators don't just eat the meat, but they eat skin and hair, most of the internal organs, bones, cartilage, and so on.) It's highly implausible because, unless you were very wealthy, meat has always been expensive in one way or another – I don't mean just monetarily – and thus not usually easy to get in high quantities. Meat would have been for people first, and then if there was some left over, that would have been given to the dogs. So, what would the dogs be eating if not meat? Basically, whatever people were willing to give to them. Thus, from the very beginning, dogs have been omnivores just like us, because, from the very beginning, they were eating what we gave to them.
A very common claim in evolutionary biology is that altruistic behavior is really just selfish behavior in disguise. We used to think of selfish behavior in the evolutionary context purely in terms of self-preservation: each individual organism will do whatever it takes to keep itself alive so it can create offspring and thus pass on its own genes. At some point that thesis was altered, and the concept of 'self-preservation' was broadened to include genetic relatives; after all, if what matters is the passing on of one's own genes, then an individual might be willing to forgo the opportunity to procreate so long as at least one close genetic relative either does procreate or still has the opportunity to. And this is supposed to be, if not the underlying drive of the mechanism of evolution, then at least a major one. So we end up with the idea that organisms are driven by the need for self-preservation, which translates into preservation of one's own genetic line, and anything that looks like altruistic behavior is really just an organism being selfish on behalf of its own genetic line.
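(The broadened thesis I'm describing usually goes by the name kin selection, and it's commonly summarized by Hamilton's rule: an altruistic act can be favored by selection when r × B > C, where r is the genetic relatedness between actor and recipient, B the benefit to the recipient, and C the cost to the actor. A minimal sketch of the rule, with made-up numbers just for illustration:)

```python
def favors_altruism(r, benefit, cost):
    """Hamilton's rule: an altruistic act can be selected for when r * B > C,
    where r is relatedness, B the recipient's benefit, C the actor's cost."""
    return r * benefit > cost

# Sacrificing 1 unit of fitness to give a full sibling (r = 0.5) 3 units
# is favored under the rule...
print(favors_altruism(0.5, 3, 1))    # True
# ...but the same sacrifice for a first cousin (r = 0.125) is not.
print(favors_altruism(0.125, 3, 1))  # False
```

(Which is exactly the picture I'm questioning below: the rule is stated entirely in terms of one's own genetic line, with nothing in it about the diversity of the larger gene pool.)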
Why doesn't anyone notice that this completely misses something hugely important for the survival of a population and of a species? Namely, that genetic diversity of the gene pool is crucial, if not necessary, for very many species.
Sometimes I feel like the majority of these scientists – and those of many other fields – spend so much time looking at the details that they've completely forgotten about the bigger picture.
Now, I myself spend a lot of time focusing on and homing in on and learning about and analyzing details, details, and more details, for whatever topic of interest has got hold of my brain. And anyone who's had enough conversation with me, or read enough of my philosophy entries here, knows that I will give an enormous amount of attention to details, because I know they're important, and most people get annoyed or lost by all the details I might bring in. But it boggles my mind why it's so hard for people to see all of the details and to see the larger picture, too, and see how all those details are related to each other and to that larger picture. It equally boggles my mind why it's so hard for people to see the larger picture and also see all the details of it, or even be willing to see the details. (I'm astounded by how many people just don't care to get into the details, even about things they think are really important, things they espouse.)
I'll have more to say about all of this...
(Just to be clear, this is all fairly off the cuff.)
When I was in high school – now seventeen years ago – I did a "report" on David Berkowitz. (Do they still do "reports" in school anymore? Or did they get rid of that, too?) My classmates were horrified and puzzled as to why on earth I'd choose to read, write, and present about such a topic; my teacher was initially worried, I think, but when my work was done, she was impressed.
If you don't know who David Berkowitz is, he was a serial killer in the '70s, known as the Son of Sam and the .44 Caliber Killer. He targeted young women, but he was no Ted Bundy. In some ways, Berkowitz was just kind of pathetic; I felt sorry for him. It was clear to me that this was a man who really hurt inside.
There's one thing from doing that report that I'll never forget: a drawing by Berkowitz. My memory's pretty faulty, but I think he drew it not long after being sent to prison. It looked like a child's drawing. It was of a man, practically a stick figure, inside a box – just four connected lines, a sort of box – and around that was another box, and around that one, another box, and another, and another, and another, and another...
It was haunting.
Because I knew exactly what he felt that picture depicted.
I've never been able to find the picture again – not that I've really tried all that hard – and the book I'd read that had it in it surely had too generic a title for me to remember which one, out of all the other books on Berkowitz, it was.
Actually, you know, I know how faulty and screwy memory is, so, hell, for all I know that picture was drawn by somebody else. Somebody no less "evil", and no less messed up inside. Because Berkowitz wasn't the only one I'd read about back then; it's just that it started with Berkowitz. Not serial killers; that's not really what interested me.
What interested me was … well, a puzzle.
I'd picked up this other book – it was either a little later in high school or early in college; this one was a book I'd purchased – written by a psychologist, discussing a handful of his cases. What all of these cases had in common was that the patients had all done terrible things, were all ethically depraved, and all basically manifested what many people would call "evil". The author, their psychologist, wasn't the least bit shy about how awful the things they did were. But what the book was about was his struggle to understand what made them that way. What made them that way:
he was clear that he didn't believe they chose to be that way, or that they wanted to be that way, but rather that they couldn't help being that way, and, more often than not, he didn't think they could (or would?) change. Not that he didn't try to change them or get them to change. (As a good scientist, you keep in mind that you're fallible, so you might very well genuinely believe that something isn't going to work, but you do it anyway. Not just because it's your job, and not necessarily because you're hoping you're wrong, but because you can be wrong, and for a scientist that mere possibility is all the motivation you need to do science, regardless of what you believe. (And if this were a different type of post, I'd tear into what the hell doing science even means, but this isn't that kind of post.))
It wasn't just about trying to understand what made them that way. Perhaps the more important part, the more pressing issue, is: what do we do with them?
Because if we're going to be honest about it, the truth is pretty grim: we can't make these people good, or even just okay. Can't, because … well, we don't even really know why, but that's precisely the problem, now isn't it? There are exceptions, I know, but they're exceptions, not the rule, and they're surely few and far between.
This isn't just about criminals. Not even every murderer is the kind of person we're talking about here.
- Tags: biology, brains, ethics, mental disorders, mental illness, neuroscience, phil of biology, phil of psychology, phil of science, philosophy of law, philosophy of mind, psychology, scientific reasoning
(Continued from Part I.)

Law v2.0
At this point, Dr. Eagleman brings up the fact that, twenty years ago, the available technology didn't allow us to see what we can now see in the brain, and that we should obviously expect technology to continue advancing, so that in the future we'll be able to see even more. But this means that lots of cases that today involve neurological disorders would, twenty years ago, have been cases not involving neurological disorders, simply because the neurological disorderliness of those people's brains wouldn't have been visible back then. What counts as a neurological disorder depends on what we can see, which depends on the technology available. But, Eagleman claims, a legal system can't be just if, at one point in time, a certain type of case is a case like any other, while at a different point in time that same case isn't a normal case and thus must be treated differently.
I could not disagree more!!!
In fact, I would argue that in order for a judicial system to be just, it must adjust itself according to technological advancements. This is quite obvious when we consider how radically things have changed because of computers and the internet, such that the judicial system had to adjust itself in order to deal with the changing world. Just look at information and copyright law. It would be absurd for the judicial system to have refused to adjust in order to deal with these technological advancements. And as computer technology has changed over the years, so has the judicial system had to change to accommodate it.
Or, take the Second Amendment: someone once pointed out that, when "the founding fathers" were writing up the Bill of Rights, they certainly did not have in mind, and could not even have conceived of, today's handguns, rifles, shotguns, semi-automatic and automatic guns, assault rifles, etc. Would "our founding fathers" have thought differently if they could have had such arms in mind? Probably. Especially given how much they in fact discussed and debated the Second Amendment to begin with. But the point was that perhaps the technological advancements in weaponry provide a justifiable reason to reconsider the Second Amendment and how we interpret it.
Here's another example: pharmaceutical laws, and other laws having to do with chemicals, because technological advancements now allow us to do all sorts of things at the molecular level that we couldn't do before.
And now we're facing questions about how to deal with scientific advancements in genetic research.
So, to say that the legal system can't be just if it deals with things differently over the years because of technological advancements is quite implausible; the opposite seems to be true: in order for a judicial system to be just, it must deal with things differently over the years because of technological advancements.
But this doesn't mean that someone convicted and sentenced twenty years ago is just screwed, because that's not how the system works, and we know that. We've already built into the system a way of allowing for changes, whatever they may be, that could alter the facts of a past case: it's called the appeals process. Just think: DNA testing. Do I need to say more?
- Tags: biology, brains, ethics, mental disorders, mental illness, neuroscience, phil of biology, phil of psychology, phil of science, philosophy of law, philosophy of mind, psychology, scientific reasoning
These are the younger bunch, who started out as six, but one was lost a while back, so now they're five.
They love banana so much, I had them jumping into my lap to grab pieces out of my hand.
(This is paraphrased.)
How can a non-empirical mathematical model that treats of fictional (even impossible, as far as scientists are concerned) states of affairs be legitimately scientifically explanatory?
Simply put, how can an entirely false model provide a scientific explanation of something?
Ah, I love this...
Yes, even in my current state I am still doing philosophy. I have to.
I mean, my mind, my brain, can't not be doing philosophy. In a way it's compulsory: I don't have a choice. I don't do it because I will myself to do it – it's not a free-will decision. It feels more like just being programmed to do it. In the worst pain, I'll still do it; alongside severe depression, in those moments when it's not so paralyzingly terrible, I still do it. It's just not a choice.
Anyone who saw me these days would probably think I was a few months pregnant. It's that bad.
I have to spend most of the day feeling a little hungry, because eating an amount that would normally be satisfying increases the pain and pressure too much. If I forget and eat how much I'm used to eating, I pay for it for the following few hours. I stave off the severe hunger with a small piece of cheese here and there, or a small handful of nuts. Otherwise, I have to lightly graze throughout the day. Grazing is how I normally tend to eat anyway, but right now I have to eat much less on each graze.
I'm limited in the clothing I can wear to what's loose and doesn't put any pressure on my abdomen. Even some of my underwear I can't wear because the elastic is a little tighter and thus hurts too much.
This is how it's been for a little over a month now.
I haven't had this much enduring pain from my IBS in quite a long time. Which is kind of funny to me - in that dark and twisted way that appeals to my sense of humor - because leaving Waldron and moving here was supposedly going to be good for me.
Just one of the many things wrong with the "it could be worse" argument is that it's irrelevant, period.
It's like telling a rape victim: "What are you complaining about? It could have been worse! He could have beaten you so hard as to have left you permanently crippled! Or he could have killed you! Quit complaining and take responsibility for your life!"
Neither of those worse possibilities takes away the moral wrongness of rape, and any child could see that much.
It would also be like telling a child whose parents beat him: "What are you complaining about? It could be worse! You're not getting sexually abused. You didn't get your nose broken. You got a few bruises! So what?! It's not a broken arm! Quit complaining and take responsibility for your life!"
The thing is, humans are pretty creative and imaginative, so, yes, we can always imagine things being worse. But that has nothing to do with the moral wrongness and/or injustice or suffering of some state of affairs. There might be a million experiences worse than rape, but that doesn't make rape any less wrong whatsoever.
If I were to steal five bucks from you, you certainly wouldn't think, "Oh, well, it could be worse: she could have stolen six or ten or twenty bucks. So, I guess that makes it okay, and I can't complain." Nobody in his right mind would think that way. (If we were good friends, and you thought you maybe understood why I did it, you might decide to let me off the hook, but you wouldn't think it was okay. Because surely, if I ever did it again, you would confront me about it. Your forgiveness the first time is not the same as your saying that it was okay.)
If you were to buy into this way of thinking, well, then by all means, you should let me steal all your money, because obviously, it could be worse: I could steal all your money and break both your kneecaps.
I could point out other things wrong with the "it could be worse" argument, but frankly, I really don't need to, because the fact that it's irrelevant is enough to kill it.
So there you go.
I will point out this: the "it could be worse" argument is certainly one of the many things said to women over the centuries in order to keep them oppressed. And said, more generally, to any oppressed people to keep them oppressed.
It first occurred to me sometime over this past summer to make a post about this topic, but, like pretty much every other thing I've thought about to post, I just never got around to it. If I were in a better mood, if I wasn't feeling so shitty so regularly these days, I'd take the time to write something more thoughtful and scholarly. On the other hand, if I did it that way, it'd be pretty guaranteed to never get posted, just remaining either as some handwritten entry in my journal, amongst the many others that never get typed, or some unfinished typed document on my computer, again, amongst the many others in that state.
I'm getting sick of the pain.
It hasn't stopped since the first week of November, a few days after arriving. Not even for a day.
It feels like a bowling ball in my gut. Some days it's not so bad, so it only feels like a half-sized bowling ball. Sometimes it feels like the bowling ball's grown teeth and is gnawing on my insides.
The apartment's on the second floor, so I'm constantly going up and down stairs to take the dogs out - given Raskolnikov's occasional incontinence, I have to let him out quite frequently. My knees and hips are taking a serious beating. Then there's the mysterious issue with my muscles - reaching burning fatigue way too quickly to be even remotely close to what's normal - so my leg muscles hurt a lot, too.
If you've never had real chronic pain, you have no idea how incredibly exhausting it is. And how it wears you down mentally, emotionally, your ability to think, to focus, the mental strength just to make it through a day, just to get out of bed.
It certainly doesn't help when you're already severely depressed and already suffer a serious lack of energy. Not to mention your whole life fell apart a few years ago.
And I'll rant at you for a bit.
After the drive across the country – I still have photos to go through – before arriving at my final destination, I crashed with my driver at his place, where I met his partner. She was very sweet and offered to introduce me to some cool people in their circle of friends so I could make some friends and connections and such. The thing is, I had to tell her that I'm not interested in making any friends.
At the apartment complex where I'm currently staying – in Tampa; I really hate Florida – there are a couple of ponds with some muscovy ducks. ^_^
Because my own words, my own thoughts, are worthless, I can give you only the words and thoughts of others.
Horwitz, R.I., Cullen, M.R., Abell, J., Christian, J.B., "(De)Personalized Medicine," Science 339, 6124 (8 March 2013).
"Personalized medicine is often described as genomics-based knowledge that 'promises the ability to approach each patient as the biological individual he or she is.' This is an appealing description, yet unless clinical, social, and environmental features that affect the outcomes of disease are also incorporated, the current approach may be carving a path to 'depersonalized' medicine, both in its science and its relevance to medical practice."
"The distinct frameworks of these examples – probabilistic … and … deterministic – contain the seed of the current concept of personalized medicine. The former, which estimates average effects in groups of patients, has proven successful in separating useful from useless therapies. However, there are differences in scientific inference embedded in these frameworks..."
"These two ways of expressing the same data lead to profoundly different interpretations. Under the deterministic model, only one of 38 patients would benefit from treatment; but because there is incomplete knowledge, the single patient cannot be identified. Under the probabilistic model, all who took propranolol have a 26% reduction in the risk of dying after a heart attack; however, it is unclear who benefits – and there is a degree of belief about an outcome that is stochastically, not biologically, determined."
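(A quick way to see how the "26% reduction" and the "one of 38 patients" figures can describe the same data: divide 1 by the absolute risk reduction to get the number needed to treat. The baseline risk below is my own assumption, chosen only to make the arithmetic line up; the quoted passage doesn't state it.)

```python
baseline_risk = 0.10             # assumed mortality after a heart attack, untreated
relative_risk_reduction = 0.26   # the 26% reduction quoted for propranolol

# Absolute risk reduction: how many fewer deaths per patient treated.
absolute_risk_reduction = baseline_risk * relative_risk_reduction  # 0.026

# Number needed to treat: how many patients must be treated for one to benefit.
number_needed_to_treat = 1 / absolute_risk_reduction

print(round(number_needed_to_treat))  # 38, i.e. "one of 38 patients"
```

(Same data, two framings: the probabilistic framing spreads the 26% across everyone treated; the deterministic framing says exactly one of the ~38 benefits, we just can't say which one.)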
"Neither the probabilistic nor deterministic framework is independent of the other. … even though the molecular mechanism is known and a targeted treatment applied, we cannot 'determine' the desired outcome for every patient nor know whether treatment failures are due to stochastic processes or incomplete biological understanding."
"[A]t the core of clinical medicine lies substantial variation in clinical response linked to the extraordinary heterogeneity of individual experience." (My emphasis.)
"In many instances, environmental features dominate genetic ones. For example, in cardiovascular disease, traditional risk factors (such as blood pressure and serum cholesterol concentration) are more influential than genetic factors (alterations in genes associated with the condition) for predicting disease susceptibility. Yet even traditional risk factors are insufficient to capture the complexity of human experience. A patient that experiences acute stress provokes certain homeostatic adjustments that affect that person's risk for disease; likewise, someone suffering from chronic stress could experience an array of certain allostatic adjustments in his or her physiology that could have deleterious health effects."
"The tendency to focus on statistics for the group rather than the individual clinical features of patients is one factor [threatening to create a path to 'depersonalized medicine']. Another is the neglect of social and behavior features that have been long disparaged as 'soft' in measurement (because they often rely on subjective reports and physician assessment). The failure to give suitable weight to clinical variation is not the fault of the statistical paradigm any more than it is the fault of the molecular orientation of contemporary medicine. The problem lies with the atrophy of clinical science. Physician investigators whose clinical knowledge equips them to create the needed clinical taxonomies have been distracted by quantitative models or reductionist science." (My emphasis.)
Okay, seriously, why am I obsessed with this right now? (Thanks, Radiolab. Thanks.)
The world's longest continuously running scientific experiment, The [original] Pitch Drop Experiment, still going strong since 1927! (Well, umm... because we've managed to miss the drop on every one of the eight previous times...)
- Mood: trying to stay distracted
A very strong and symbolic act, and one I can actually say I'm proud of our country for doing. Gabon and the Philippines have already destroyed their ivory stocks. Let's hope that other countries will follow suit.
On the one hand, it's hard to feel okay about destroying some incredibly beautiful and amazing ivory sculptures, especially very old pieces that have some historical and/or traditional or cultural or even spiritual/religious significance. But on the other hand, if we hold onto them then we are promoting the idea that those pieces are extremely valuable, which means we are reinforcing the market for ivory and reinforcing the poaching. So long as ivory is kept in museums, for example, then we are declaring it to have a certain kind of value, and so long as it's deemed valuable, there will be a reason for people to seek it out, a reason for there to be a market for it, and thus a reason for people to keep poaching elephants for it.
I'll be the first to say that it's deeply unfortunate that all that ivory, all those beautiful pieces, were destroyed.
But this is one of those cases in which two or more ethical principles conflict with each other.
This is one of those cases in which we have to decide which of two values is the greater ethical value.
I think if I were the me of ten years ago, I would have a different reaction and a different view on it. I would certainly think it was wrong to have destroyed all that ivory, because art is a very important value. And I would probably think: if we end up killing off all the elephants in the name of art, well, maybe it would be sad, but oh well.
I do now take a different stance on the issue of our killing off species, but what I know and understand better now than the me of ten years ago is that this isn't just about elephants. It's also about greed, and about power. And it's about the existence of black markets of a certain kind, the conditions and the consequences of those black markets.
Now, saving elephants is definitely a good thing, but whatever you might think about saving elephants, getting rid of another black market of one of the worst kinds should be reason enough on its own to support the destruction of ivory. It's sad, yes, but it's time to let go, and to embrace and fight for ethical principles more valuable than pretty objects.