A leaf, tired, releases herself, falls, and breaks the glass surface of the water.
- Tags: my life
I have never understood how men for thousands of years have been able to completely and utterly miss the fact that being abusive and oppressive, being violent, makes you a coward, not a man. (I mean, unless you want to say that 'being a man' inherently involves being a coward…)
(Here's an interesting historical fact for y'all: in hunter-gatherer communities, women were generally treated as equals, treated with the same respect and dignity that men gave each other. It wasn't until humans invented agriculture and made the switch to agricultural communities that men started to oppress women and treat them as objects (property).
Here's another quite interesting follow-up fact: studies strongly suggest that the best social environment for raising children – for their health, both mental and physical, for their behavior, for their education – is hunter-gatherer communities. Hmmm… makes you think, doesn't it?)
(No, I haven't seen the movie 'Mongol' that I saw mentioned somewhere. Never heard of it before, don't know about it.)
Inspiration Porn, Part Two
(continued from Part One)
I am sort of picking up where I left off, but then taking things into a bit of a technical discussion on some things that perhaps might seem tangential and not all that relevant to the main issues. Bear with me for a bit, as I think they are very relevant. (Because I am meticulous and thorough, and because these things get at some of the deeper and more metaphysical issues.)
There is an obvious sense in which a person missing both arms, or who is blind or deaf, or is unable to walk, has or is of a condition that can be medically defined and characterized as atypical in a way that limits and/or restricts capabilities that are considered typical. We could take this to characterize what it means to be disabled. However, such a characterization is inadequate for addressing the ethical, social, and political aspects and implications that we take disability to have. So disability must be understood within the larger social, political, and historical context. The last is relevant for technological reasons. For example, I am extremely nearsighted; had the technology to make glasses and contacts strong enough not been available, I would not have been able to do the majority of the things I have done in my day-to-day life over the years. But because of the historical context, and thus the technology available to me, my extremely myopic eyesight has never been a disability for me. In a different context, I would have effectively been blind.
To any sighted person, blindness seems an obvious disability. But what is it that truly makes being blind a disability? Being blind in a world of sighted people.
Now let's be very careful here, because I am certainly not saying something as idiotic as, "disabilities don't really exist because they're just pure social constructions."
So, let me attempt to clarify and explain; but it will take a few inferential and theoretical steps.
One could try to state what might seem to be my position as: there is a biological / medical aspect to disability and a social aspect; or, put slightly differently, part of being disabled has to do with something biological / medical, and part of it has to do with something social.
But that is not my position, because my position is far more complicated than that. It is informed by what I have been learning from evolutionary biology, genetics, and their intersections with the study of human behavior. And one of the conclusions that we have come to – by 'we' I mean something like the collective embodiment of scientific knowledge, if that makes sense – is that we must abandon our previous way of dichotomizing the world into 'nature' and 'nurture'. From which it follows that thinking of our world as dichotomized into 'biological' and 'social' is fundamentally misguided and can no longer make sense. There is a direct similarity with our abandonment of the 'gene / environment' dichotomy: it no longer makes sense to speak of either as distinct and isolated from the other, for we now know that all sorts of complex gene–environment interactions are constantly going on. In the same way, the biological and the social interact in complex and interesting ways that blur the line between the two enough that it would be appropriate to say that they are, metaphorically speaking, entangled.
Let me give you a simple example of how this can be. The so-called Wellesley effect: take some random women and have them live together, and within a few months, their menstrual cycles will synchronize. (Obviously, the women have to still be menstruating. I would also think that being on birth control pills would prevent one's cycle from shifting. I should note, too, that the effect's replicability has been questioned in later studies; I'm describing the classic findings here.) Now, you have to appreciate just how amazing that is all by itself. There is, of course, a proposed biological explanation: pheromones. However, that only tells us about the biological mechanisms that allow for such a thing to take place; it doesn't actually fully explain what occurs. Because: it's not random who synchronizes whom. You see, within the group of women living together, some will find their cycles shifting dramatically, while others' cycles drift only slightly, or not at all. But what accounts for that difference? It is the social structure amongst the women: those who are more extroverted, outgoing, and socially dominant set the rhythm, and the cycles of those who are less extroverted, less outgoing, and less socially dominant shift to fall in line with theirs. So if two women's cycles start out "opposite" to each other, so to speak, it is the woman in the latter category who will experience a dramatic shift in her cycle, while the one in the former will experience only a slight shift or none at all. Similar effects have been reported in other primates and in various other mammals. (They all have social hierarchies that aren't too hard to figure out after some observation and/or experiments with the social group.) So here we have an amazing and very clear interaction between the social and the biological, with the social having a causal effect on the biological.
I think to really start to appreciate just how deeply connected the social and the biological are, how much they interact, and thus, that we really cannot pull them apart and take them as distinct from each other, you have to realize that the biological needs the social in order to survive. You have to take that very seriously and not underestimate just how important a point it really is. (Forgive me for speaking a bit metaphorically here, with my use of 'the biological' and 'the social'; but I think you know what I mean well enough, and those terms are just easier.) Let me explain what I mean.
I am having a really hard time understanding why this is so hard to understand!
Back in 2006, while I was dating a guy whose son had been diagnosed with ASD, I took it upon myself to research and read and learn as much as I could about ASD, because I didn't know anything about it, and I was going to be interacting with his son on a regular basis. From what I had read back then, and from several conversations with the guy I was dating – who of course had read tons of stuff on ASD while raising his son, as well as meeting and talking to many parents of ASD kids in his local community – I pretty much immediately concluded that the rise in ASD diagnoses was obviously due to changes (expansions) in the definition and in the diagnostic criteria, as well as to ever increasing awareness about ASD amongst parents, teachers, and doctors. It couldn't be more obvious.
No, really, it couldn't be more obvious.
It's plain and simple: the frequency of ASD is not increasing. The only thing increasing is the number of diagnoses, for the reasons stated above. The same goddamn thing is true for AD/HD.
And look! Reviews and studies checking all the relevant facts show that the obvious is indeed true: http://www.theskepticsguide.org/explaining-the-rise-in-autism-diagnoses
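A toy simulation can make the reasoning concrete. Suppose the underlying frequency of a trait never changes, but over the years the diagnostic criteria broaden and awareness grows: the diagnosis count still climbs steadily. This is only an illustrative sketch; the prevalence, thresholds, and awareness rates are all invented for the example.

```python
import random

random.seed(42)

POPULATION = 100_000
TRUE_PREVALENCE = 0.015  # held constant: the trait itself is not getting more common

def diagnoses_per_year(threshold, awareness):
    """Count diagnoses in one 'year', given a diagnostic threshold
    (lower = broader criteria) and an awareness rate (the fraction of
    actual cases that ever get evaluated)."""
    count = 0
    for _ in range(POPULATION):
        if random.random() < TRUE_PREVALENCE:   # actually has the trait
            severity = random.random()          # 0 = mildest, 1 = most severe
            if severity > threshold and random.random() < awareness:
                count += 1
    return count

# Across the "years", criteria broaden (threshold drops) and awareness rises,
# while TRUE_PREVALENCE never changes.
counts = [diagnoses_per_year(t, a)
          for t, a in [(0.7, 0.3), (0.5, 0.5), (0.3, 0.7), (0.1, 0.9)]]
print(counts)  # diagnoses climb year over year despite flat prevalence
```

The point of the sketch: a rising diagnosis curve, on its own, tells you nothing about whether the underlying condition is becoming more common.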
What the hell, people?
This isn't rocket science. A child could figure this out!
I am getting so sick and tired of hearing about the supposed rise in autism, that more and more kids these days are getting autism than ever before. I just want to beat my head against the wall, or start ripping my hair out. I am getting so tired of this infectious idiocy!
Do people not listen to themselves when they say that vaccines cause autism and doctors and governments know it but they're hiding it, lying about it? Do they really not notice how utterly ridiculous that is? So many facts, so many facts, disprove such claims! (Oh, not to mention that the original "study" supposedly proving the link between autism and vaccines was a complete fraud, proved to be a fraud, and the science/medical journal that published the original paper retracted it when that fact became known. Do your fucking homework, people.) Do they really not notice the striking similarities between what they keep claiming and the wild delusions of a paranoid schizophrenic???
Yeah, vaccines are obviously intentionally dangerous and toxic, because doctors and governments are clearly on a mission to poison all your children…Riiiiiiight…
And I'll sell you the Brooklyn Bridge…
I wish that I could tell you these things, but I know you won't listen to me.
I always find it rather shocking when you accuse me of being closed-minded when you're the one who doesn't ever want to have a real discussion about anything – I'm always the one opening up the conversation, and you're always the one closing it just when it was about to get off the ground. You say you respect my intelligence and that I'm always making an effort to learn things, and that you don't know anyone else who knows as much as I do about so many different things and so many different fields of knowledge, and who bothers to try to stay relatively up to date on them. Not to mention how much I've learned and had my mind opened to by traveling to and living in some very different places, and by talking extensively with very different people from all over the country and all over the world. And yet you seem to think you have nothing to learn from me.
Well, since I can't tell you these things since you won't listen, but since I can't not say them, I'll just have to settle with saying them here.
You say you want your dog to bark, to warn you about things; you say you want him to be a guard dog. You say that's what his breed was bred for in the first place, to be guard dogs, protectors, so it's just in him to be a guard dog, so it's just in him to bark at everything.
Funny that you say the opposite about pit bulls when you defend them against prejudices that they're vicious dogs ready to snap at any moment because they were bred to be aggressive because they were bred to be fighters. In defense of pit bulls you're ready to argue that, for the most part, any particular pit bull that is aggressive is so because of the way s/he was raised and treated by owners, because there are plenty of pit bulls that are never aggressive because they were treated with love and care and respect.
Furthermore, just because some breeds of dogs were bred specifically to be guard dogs does not mean it's automatically okay or a good thing for dogs to be guard dogs – 'okay' or 'good' for them, let alone for us. That's fallacious reasoning: at the very least, some combination of the naturalistic fallacy and the appeal to tradition or antiquity, i.e., this is the way people have always done things, so it must be right. Once again, your defense of the pit bull provides a perfect counterexample. And further, a dog that barks like crazy and goes nearly ballistic on the couch at the window, and can't stop barking and whining on command, can't calm down on command, does not exemplify what it means to be a guard dog.
It's also a terrible idea to attempt to justify something like this about animals based on the past, because it used to be widely believed that animals didn't feel any pain whatsoever, could not suffer, had no mind, no feelings, and I would not doubt that some people didn't even consider them to be genuinely alive. That means, the way people treated animals in the past, what they used them for, etc., was all based on seeing them as pure objects for our use in however we desired to use them.
But anyway, regardless, you say you want your dog to do what his breed was bred for, to be a guard dog, so you want him to bark at things. So you encourage him to bark, you let him bark, you positively reinforce the barking, giving him positive attention, and he's happy to have pleased you, because that's all any dog wants to do, please his/her master.
Except that you don't see what a disservice you are doing to him. You don't see that he's a complete paranoid wreck, with unhealthy levels of anxiety. You don't see that he spends a lot of his time on edge, unable to relax. You say he just has a sensitive stomach, that's just the way he is; but I say his stomach and digestive problems are most likely due to the extreme stress and anxiety that he's constantly experiencing.
I know you love him dearly, and care so much for him – you wouldn't spend the money and time you do cooking for him every night if you didn't. You wouldn't let him have total freedom to all the furniture and to your bed if you didn't. You wouldn't take and post online so many pictures of him if you didn't.
But that's just why it hurts me so much to see you so blind to his unhealthy paranoia and anxiety. I know you love and respect animals – another way in which we're a lot alike. But having love and respect and care for someone doesn't automatically mean you understand him or her. But shouldn't you need to if you're going to care for him or her in the right ways and give him or her what he or she needs?
So maybe you want a guard dog. But what about what he wants?
Inspiration Porn, Part One
I am truly getting tired of seeing it. I suspect I'll probably upset some people, but I wouldn't be a good philosopher if I didn't. I think that people who post and spread inspiration porn do care about people and want to help other people; but if that's the case, then such people had better be prepared to listen to the other side that argues that it is actually damaging to people, and is doing a serious disservice to everyone. Not least because it presumes a false (outdated) theory of psychology, but I'll get to that. First we need to talk about being disabled.
The term 'inspiration porn' was coined by comedian and disability activist Stella Young, who recently passed away unexpectedly. Young was born with the genetic disorder osteogenesis imperfecta and used a wheelchair for most of her life. 'Inspiration porn' refers to any images and videos – and there are tons of them all over the internet – of disabled people doing something or other, where the images and videos are presented as being "inspirational" to non-disabled people. The basic message is: if this disabled person can manage to do [whatever], and being disabled is obviously far worse than anything else, then all of you who aren't disabled have no excuse, and it's just your own fault for not doing great things and accomplishing anything you want to. Stella openly railed against these images and videos and their messages. To start with, they are objectifications of one group of people for the benefit of another group of people; hence her calling them 'porn'. I do recommend her TED talk – don't worry, it's not even 10 minutes long – "I'm not your inspiration, thank you very much."
To make any sense as "inspirational" in the way it attempts to be, the underlying premise of inspiration porn is that being disabled is a bad thing, i.e., that being a disabled person is inherently worse than being a non-disabled person. Now, I know, I know, that just seems obviously true, doesn't it? Well, that is precisely what Stella means by telling you that you need to question and completely rethink what you think you know about disability. (So if you didn't listen to her talk, do that before reading any further.)
Let me try to make the point by making a bit of a comparison: it wasn't that long ago when it was obviously true to people that being a woman is inherently worse than being a man. (And in some parts of the world, people still believe that.) I do not think it is an exaggeration at all to say that it used to be the case – and in some places still is the case – that just being a woman is to be disabled. If you are inclined to think that the difference between that and disabled people is that being disabled is a genuinely biological thing, then don't worry, because they had their biological arguments for how and why women were obviously inherently worse off and basically disabled. And they had plenty of evidence to draw from that women were not capable of doing all sorts of things. The same was said for all non-whites, of course. (The social bit is coming, so just keep reading.)
Philosophers like Dennett push the scientistic view in the philosophy of mind that, look, neuroscience explains everything we want to know about the mind – or rather, neuroscience is on the way toward that goal, but it already explains a whole lot, and eventually, it will explain everything we want to know about the mind – so we should just forget about all the traditional debates in the philosophy of mind, because they're outdated and esoteric. That's an oversimplified and exaggerated way of putting it, but you get the basic idea.
I am certainly on the side of science, and I certainly think we should be paying attention to neuroscience if we are digging into trying to understand the mind – whatever that is. But despite the fact that such philosophers often claim that the science has informed their philosophical positions, it seems to me more likely that they look to the science in order to find ways of fleshing out and justifying a view they've already decided is the right one. I can't justify that claim, of course, since it's just something I suspect; but I suspect it mainly because, if you were really committed to letting the science inform your philosophical positions, then you would be changing your position rather frequently to correspond with the ever changing body of data, discoveries, experiments, studies, findings, and theories in neuroscience. Furthermore, you wouldn't argue that some view is the right one because that's what science demonstrates; rather, you would argue for a view that is highly reasonable based on current scientific evidence, but that must remain open to change, even radical change, as new evidence comes in. Moreover, if you were really committed to the science, then you would have to be willing to admit that your view is not based on neuroscience, but is based on the work and claims of particular neuroscientists, with whom other neuroscientists might (or do) disagree.
I think an excellent example from which a very serious lesson should be learned is mirror neurons. Remember all the hype over mirror neurons? And how they were going to explain a whole ton of things? Like how we learn to do all sorts of things, how we learn language, how and why we, either consciously or unconsciously, imitate others, how we understand others and develop theory of mind in general, how we have empathy, just to name a few; furthermore, "broken mirrors" were going to turn out to be the cause of autism / autistic spectrum disorders, and perhaps other conditions. Many philosophers jumped on the mirror neuron bandwagon – oh, why do I find that hilarious? – and built all sorts of theories based on them. Except that, at the time, they paid no attention to the scientists who were openly skeptical about mirror neurons – not skeptical of whether they exist, but skeptical of all the claims being made about them. And except that the whole mirror neuron thing has since fallen almost completely apart: while mirror neurons do exist, they don't actually explain much at all, and it turns out they're pretty damn boring in comparison to everything they were initially hyped up to be. At this point, referring to mirror neurons as the basis for some claim about the mind or brain would only reveal how foolish and behind you are.
(How boring are mirror neurons? You would think that, at the very least, they would explain how and why it is that we, consciously or unconsciously, imitate others around us. And imitation is actually very important for humans: most obvious to us is the imitation young children do, which allows them to learn all sorts of useful things; but we as adults imitate far more than we're aware of, and this plays a vital role in social interactions, as well as in explaining how we're even capable of the kind of social interaction that has allowed us to develop such complex and far-reaching societies. But mirror neurons can't even explain imitation. They fail to explain imitation because the very individuals in which they were first discovered and studied extensively don't imitate: the lack of imitation is one of the most significant differences between monkeys and humans, and it was in monkeys that mirror neurons were discovered and extensively studied. (Yes, that's correct: monkeys don't imitate.))
One part of the lesson to be learned from mirror neurons is that the brain really is much more complex than we keep wanting to think it is. The thing that so many scientists, and philosophers, found so appealing about mirror neurons was how simple of an explanation they would provide for several different functions and cognitive capabilities. If, that is, they were anything at all like what they were first hyped up to be. Since they're not, they don't explain all those things, which means that we still don't understand how the brain does all those things.
And this is exactly the point that not enough philosophers appreciate: if you actually stay on top of current neuroscience, you will notice that, in spite of how much can currently be explained, there continues to be a whole lot that can't be explained – not 'can't' in a strong modal sense, just 'can't' with respect to current theory and evidence. And you have to appreciate what consequences follow when we find out we were wrong about something, because neuroscientific research, findings, and facts are not isolated from each other. Just consider what it meant, for all the research done and papers published that were based in some way on mirror neurons, when neuroscientists eventually figured out that mirror neurons were nothing like what they had initially thought. An even bigger deal with far more consequences: when we discovered the plasticity of the brain. Or when we discovered that brain cells do, in fact, regenerate.
All right, bear with me here: I promise this will all come together and make some sense. Part of the challenge for me, in trying to say all that I want to say, in trying to make and argue the points I want to, is that what I want to say is the result of my drawing upon a whole bunch of things, a variety of things, and making all sorts of connections between them. And when I say a 'bunch' and a 'variety', I really mean that; I wouldn't doubt that, if I fully laid it all out, it would be enough to fill a book. Which is a bit frustrating, actually…
"…a billion times less than a trillionth of a second."
The Earth gets one proton's width closer to the Sun every year.
I just found out that a Jamaican guy I know was basically cured of his epilepsy at the age of twelve by smoking marijuana. He suffered grand mal seizures almost every day, despite being on medication for his epilepsy. One day, when he was twelve, a good friend of his pulled out a joint and invited him to smoke. After smoking a few times over the next couple of days, he noticed that he hadn't had any seizures those past couple of days. It couldn't have been the medication, because he'd stopped taking the pills. His mother suspected he was smoking weed and confronted him about it, and of course at first he lied, but she figured it out and he confessed. But she also noticed he hadn't had any seizures those past couple of days, and knowing that he'd stopped taking the pills, she immediately suspected that it might be due to the marijuana. So she let him smoke it. He has been smoking it every day since, and he's never had another seizure. He's now 41.
Good for his mother for being open minded enough to immediately consider that the marijuana might be helping him, when she otherwise would have been utterly against his smoking it.
Here is an excellent example of skepticism at work revealing some pseudoscientific bullshit, i.e., thoroughly flawed reasoning, that got very wide media attention. This is also an excellent example of skepticism revealing an utter failure of so-called "expertise"; in other words, someone claiming to be some kind of expert who is very much not an expert and who really doesn't know what the hell she's talking about. But she sure as hell fooled a whole lot of you out there:
Eating Yoga Mats
I really urge y'all to read through this carefully, and the comments are worth reading, too, especially for how some of them instantiate exactly some of the same kind of pseudoscientific thinking that is being criticized.
It really makes me wonder: for all those who keep insisting that this or that is dangerous despite all the scientific studies and evidence for its safety – such as vaccines, just to throw another random example out there – if all that scientific evidence doesn't convince you, then what would? And if nothing would convince you, then what is the difference between your beliefs and the delusions of a paranoid schizophrenic?
I do think that many, but not all, people who keep insisting on the dangers of this or that, in spite of scientific evidence for their safety, do believe that they have scientific reasons to think such-and-such is dangerous, or at least, that a scientific basis can be found for their beliefs. But if that is so, then they would be agreeing to the legitimacy of scientific methods and scientific evidence and scientific practices, etc. But why, then, do they still refuse to accept the scientific evidence that such-and-such has been shown to be safe?
The usual objection seems to be that, somehow, in some way or other, all of that scientific evidence is corrupt. Well, on what evidence or reasons would one think that? Corrupt is a rather strong accusation, one that is creeping over into conspiracy theory territory.
The thing is, you can also just admit that you have decided to adhere to a particular ideology, and that's what you've decided you're going to stick to no matter what. Because that's what feels right to you, which basically comes down to an emotional basis for your beliefs, not an evidential and scientific basis. And it's okay if that's the case, because based on our own biology, we are, "against our will", emotional creatures. As long as you don't push your beliefs onto others and pretend that there is a scientific and evidential basis for your beliefs and ideology, nor attempt to tell others that they are wrong if they disagree with you, then it's perfectly okay. It seems to me that we would all be better off if people were just willing to admit when they're actually basing their beliefs on emotions and ideology rather than evidence or science or reason.
It's not as if emotions are always unacceptable as a basis for something. When you love someone, if emotions weren't the basis, we should think there was something very wrong with you. When you find something to be deeply morally bad, we should think there was something very wrong with you if you didn't feel a strong emotional reaction. If the music you claim to love didn't emotionally affect you, we should think there was something very wrong with you. If there is something that gives meaning to your life, we should think there was something wrong with you if there was no emotion in that meaningfulness for you.
So long as you know where to draw the line between deciding what's right for you and what's right for others: you don't get to decide what's right for others when what you believe is, at the end of the line of reasoning, based only on emotions and ideology.
Picking up where I left off:
On the other hand, we have to remember that scientists are human, and are thus susceptible to every human emotion that you are. Scientists do sometimes slip into the common human feelings of 'interesting' and 'exciting', and one of the consequences is perhaps one of the worst types of bias: bias against null findings. A very unfortunate fact is that most scientific research, studies, and experiments that come to null results are never published. In fact, many never even reach the stage of being submitted to journals! Many scientists assume right off the bat that journals are unlikely to publish null results; some journals are indeed biased against publishing null results, but there are many that aren't. Sometimes the work is even abandoned before anything can be written up, because the results were null. Keep in mind that there are, however, many different reasons why null results might lead to a study or experiment being abandoned; so one has to be very careful not to assume that it's just because the scientists themselves were looking for a specific result. Don't forget that no science can be done without someone footing the bill.
Notice that, in this case, 'bias' does not necessarily refer to something going on in the publication process itself; it refers, rather, to the situation in which null results are underrepresented in what actually ends up getting published, without any reference to the specific causes of that situation.
The bias against null results is one of the worst offenses in the sphere of medical science, and that goes for research into both modern and alternative medicine. This kind of bias leads to the situation in which, for example, most or all of the scientific literature on the efficacy of some alternative medicine treatment shows positive results, from which it would be concluded that the treatment has been "proven" to work, because all these studies show that it is effective. The problem is that we have no idea how many studies were completed that showed it was ineffective, or how many were abandoned because they weren't showing anything. (To be fair, this is certainly a problem for modern medicine, too, and not just for alternative medicine. And accusations of intentional biasing, especially against null results, can justifiably be made against some "players" on both sides.)
In fact, for treatments that don't work, the scientific literature often shows a heavy bias in favour of the treatment as effective. How can that be? Well, there are a few things going on.
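One way that can happen – publishing only the chance "positive" results for a treatment that does nothing – is easy to see with a quick simulation. Imagine a treatment with zero real effect tested in many small two-arm trials, where only the trials that happen to reach "statistical significance" make it into print; the published record then looks unanimously positive. This is a sketch with invented numbers: the z-test and the p < .05 cutoff are standard, but the scenario itself is hypothetical.

```python
import math
import random

random.seed(7)

N_STUDIES = 1000
N_PER_ARM = 50
BASE_RATE = 0.4   # recovery rate with OR without the treatment: zero real effect

def run_study():
    """One small two-arm trial; returns the one-sided z statistic
    for 'treatment beats control'."""
    treated = sum(random.random() < BASE_RATE for _ in range(N_PER_ARM))
    control = sum(random.random() < BASE_RATE for _ in range(N_PER_ARM))
    p_pool = (treated + control) / (2 * N_PER_ARM)
    se = math.sqrt(p_pool * (1 - p_pool) * 2 / N_PER_ARM)
    return 0.0 if se == 0 else ((treated - control) / N_PER_ARM) / se

zs = [run_study() for _ in range(N_STUDIES)]
# Only "significant" results (one-sided p < .05, i.e. z > 1.645) get printed:
published = [z for z in zs if z > 1.645]

print("trials run:      ", N_STUDIES)
print("trials published:", len(published))
# Every published trial is "positive", yet the treatment does nothing.
```

With these numbers, on the order of 5% of the trials come out "positive" purely by chance; if those are the only ones a reader of the literature ever sees, the evidence appears to show uniform support for a useless treatment.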
I had an interesting hitchhiker this morning while letting my dogs out. A butterfly nearly flew into my face; he fluttered in front of my face for a few seconds, and then landed right on my shoulder. He hardly even budged when I moved some of my hair – a few dreads, that is – from off my shoulder; neither did he have any reaction to my coughing a few times. I walked over to the pond, around the pond, stopped to say good morning to the ducks and see how they were doing; walked over to the other pond – at this point I was all the way on the other side of the apartment complex from where our apartment is – said good morning to the ducks over there; and eventually made my way back to our building. The whole time he just sat quite comfortably on my shoulder. I tried to coax him off and onto a bush, but he stubbornly refused at first. I finally convinced him, but the particular leaf he was trying to get onto was wet and slippery, so I had to give him a bit of a push up from behind.
He seemed to clearly have decided right away that I was friendly, i.e., not a threat, not even with my giant arm and hand coming near him to move and keep my hair out of the way a few times, not even with the sound / vibrations from my coughing and raising my voice to call to my dogs a few times. Makes me wonder in what way he sensed my friendliness. I mean, since I'm not familiar with all of the sense modalities butterflies possess and all of their nuances.
But it was a nice way to start the morning. :D
As mentioned in the previous entry in the context of the discussion on modern vs. alternative medicine, a common tactic is to bring up flaws and biases in the scientific research. I'm going to use 'scientific literature' as a partial replacement for 'scientific research' because the former has the important specificity in its reference to scientific work that has been published. The importance of this will become very clear below, since it is a significant part of the discussion.
One point I want to start with, before getting into specifics, is that those who use this tactic against mainstream modern science, especially against consensus therein, seem to bring up these problems regarding the scientific literature as if the scientific community is completely unaware of them. It's as if they think they are making some kind of exposé. But literally every single problem they bring up is already well known throughout the scientific community. These are problems that the scientific community openly acknowledges rather than tries to hide, precisely because there is a widespread and constant effort to avoid and/or fix these problems wherever possible. And for those that either haven't yet been or can't be avoided or fixed, the effort is to keep everyone aware of them in order to minimize them. Because this, too, is part of the very nature of science, and thus another contributing factor to the inherent epistemic trustworthiness of science; for this is part of what significantly helps to make science self-correcting.
It is, frankly, naïve to think that the scientific community on the larger scale doesn't take these problems into consideration when it comes to things like acceptance, agreement, and consensus. In addition, scientists have been trained to be skeptical of the work of other scientists when even the slightest thing in that work seems fishy. That doesn't, of course, mean that every single scientist out there is very good at employing the skeptical skills they were trained to develop; but taken as a whole, the scientific community is pretty damn good at it. (If it weren't, y'all wouldn't have those pretty iPhones in your pockets, and hundreds of thousands, if not millions, of women would be dying from breast cancer yearly.)
The point is, when the scientific community – or more specifically, the relevant scientific community for some issue – reaches some degree of consensus on something, that consensus doesn't occur without taking into consideration all of the problems of the scientific literature and scientific research. Reaching consensus is not a simple process, and often involves a lot of time, a lot of work, and a lot of conversation between many scientists, with at least some, if not most of that conversation making it into the scientific literature.
There is a second point to mention before getting into the specifics. The whole "sphere", so to speak, of scientific literature and publishing scientific work is really a very complex, complicated, and context-dependent "thing". I would argue that it is more complex and complicated than people outside the scientific community realize – even more than some inside the scientific community realize! – and so those who use this tactic against modern science don't know enough to know what they are talking about. It's worth pointing out that, while advocating distrust in the experts, they are claiming (wrongly) to be experts on this whole aspect of science, i.e., scientific literature and publishing scientific work. So the point I want to mention is this: the situation is actually worse than they realize, but it is also much better than they realize. I think that if they really knew all the details and nuances of the situation, they would probably be more horrified than they already are. However, that's not the end of the story – and that's the problem with their thinking: they see a bunch of problems and then stop right there without bothering to look any further. To make an analogy, consider mountain climbing: it can be an extremely dangerous thing to do, and some people have died doing it; but that's precisely why training is necessary and why so many measures are taken to make it much, much safer. That doesn't, of course, guarantee that no one will ever get hurt doing it, or even die, but it does significantly decrease the chances of injuries and fatalities; so that, so long as you know what you're doing, you are much less likely to get hurt or die. I mean, you're still hanging off the side of a fucking mountain, for christ's sake, but it's a safe bet that you're probably going to be fine.
With all the information coming at us from every direction, how are we supposed to figure out what things are legitimate and trustworthy and what things are bullshit of one kind or another?
To really appreciate the gravity of that question we have to remember and keep in mind something so obvious we too often forget it: we are not the experts, we lack the necessary background information and knowledge that would allow an expert to justifiably decide whether something on a topic within his area of expertise is legitimate and trustworthy or probably wrong.
This statement is most definitely oversimplified in order to make it as general as possible. For, some of us are indeed experts in some specific field or other, but only within that field. So, with respect to every other field in the world for which you are not an expert, the statement applies. On the other hand, even the expert has to remember that he doesn't (and can't) know everything within his own field of expertise.
So we need to go into some length about expertise, because it's extremely important, and because there is in our culture a frightening trend of distrust in experts – which leads to utter nonsense, by the way. Although I think – I hope – the trend may not be as widespread as we think it is. That wouldn't take away one bit from how harmful and destructive it can be; I only want to make the point that we should be careful to keep things in perspective. However, I don't know how widespread the trend actually is, so I may be wrong; the situation may be similar to the antivaccination movement. Statistics show that the number of parents vaccinating their children in the U.S. has actually not decreased, which means that, in spite of all the media attention and however many antivaxers are voicing their opinions, the overall U.S. population has not lost its trust in vaccines. The illusion that fewer parents are vaccinating their children is due to two main phenomena: (1) all the antivaccination talk going on all over the place, i.e., all the media attention, antivaxers becoming more vocal, antivax postings all over the internet, etc.; and (2) shifts in who isn't having their children vaccinated. The antivaccination movement has had the interesting effect of making some parents more aware of the need to have their children vaccinated, while those who aren't having their children vaccinated are now more concentrated in certain sociopolitico-economic circles. The trend of distrust in experts could be occurring in a similar fashion, though I could be wrong and it could be much worse. Either way, it's a problem, and thus requires a lot of attention.
I said that the distrust in experts leads to utter nonsense. Take, for example, the distrust of modern medicine, and placing trust into alternative medicine.
After keeping myself rather isolated from the world beyond the borders of academia, save for some especially exceptional exceptions, I have been allowing myself some contact with the outside world. Of course, one has to ask what exactly that means for someone like me.
At the moment, it means listening to a few selected podcast shows. Admittedly, it reminds me a little of why I tend to keep myself so isolated from the wider world: I am extremely philosophically and emotionally sensitive, and I mean those two to go together, not separately. I won't get into explaining that, but I will simply say that the relevant consequence here is that I very quickly become overwhelmed, in a way that is nearly paralyzing.
The Skeptic's Guide To The Universe
I wish that I had known of this podcast years ago. Then again, perhaps it is a good thing to be coming to this podcast after being a well-seasoned philosopher.
Let me explain why I urge everyone to listen to this podcast.
First of all, if you are put off by the terms 'skeptic' and 'skepticism', then you don't understand what they mean in this context. There is no single definition of 'skepticism'; there are, rather, different types of skepticism. The most important aspect of the type of skepticism being referred to here is that it is methodological. What that means is that this is not a doctrine; it is an approach, a collection of methods. I would argue, and I think the hosts of the podcast would agree with me, that this type of skepticism is ultimately part of the underlying foundations of philosophy and science. Neither philosophy nor science is possible without it. Scientific progress is not possible without it. Perhaps even human progress is not possible without it.
One of the main goals of the SGU (the Skeptic's Guide to the Universe) is to teach critical thinking. But there is a focus, a theme: science!
This is extremely important because we are literally surrounded by science, in the sense of, most obviously, technology, i.e., all the things that science has made possible for us to do and to produce, from the simplest, like the shoes you wear, the roads you drive or walk on, and every plastic or metal thing you've ever touched, to the most sophisticated, like cybernetic prosthetics, nanotechnology, and putting robots on Mars. (Take a moment to stop and appreciate just how fucking amazing that last one really is. In the words of astronomer Phil Plait, who's been a guest on the SGU a few times, "we have a one-ton, roving, chemical, nuclear-powered, laser-eyed rover, you know, laboratory on Mars, that we sent there".)
But there is another significant sense in which we are surrounded by science (or, "science"): information. We have scientific information – or, information purported to be scientific – being fed to us from several different media sources and outlets, including all your favourite social networks where people post and repost and share links to stories and articles and blog posts of all sorts.
The crucial question is: should you believe it?
In other words, how do you know, out of all that information, what is legitimate and trustworthy, and what is bullshit? (In the next post, I'll get into why this really matters for everyone everyday.)
But there's a conceptually (and chronologically) prior problem here. And it's this that is in dire need of attention; it is here that the most important lesson is lacking. (Well, lacking for most people, but not everyone, thank goodness.)
Before one can ask the question, one must know that there is a question to be asked.
(I do mean the above question, of course.)
That is skepticism: just knowing to ask whether one should believe what is being presented to you, what is being told to you. That is where skepticism begins.
Take notice!: it is not a refusal to believe something, nor a rejection of something. Except for passive acceptance: it is a refusal/rejection of that.
I was just asked to recommend a book that would explain everything, so this person would be able to understand everything after reading it.
I just don't get it. What the hell is wrong with people?
David Foster Wallace was right: Irony is ruining our culture
[Original source: www.salon.com]
David Foster Wallace long ago warned about the cultural snark that now defines popular culture. It's time to listen
By Matt Ashby
and Brendan Carroll
Percy Shelley famously wrote that “poets are the unacknowledged legislators of the world.” For Shelley, great art had the potential to make a new world through the depth of its vision and the properties of its creation. Today, Shelley would be laughed out of the room. Lazy cynicism has replaced thoughtful conviction as the mark of an educated worldview. Indeed, cynicism saturates popular culture, and it has afflicted contemporary art by way of postmodernism and irony. Perhaps no recent figure dealt with this problem more explicitly than David Foster Wallace. One of his central artistic projects remains a vital question for artists today: How does art progress from irony and cynicism to something sincere and redeeming?
Twenty years ago, Wallace wrote about the impact of television on U.S. fiction. He focused on the effects of irony as it transferred from one medium to the other. In the 1960s, writers like Thomas Pynchon had successfully used irony and pop reference to reveal the dark side of war and American culture. Irony laid waste to corruption and hypocrisy. In the aftermath of the ’60s, as Wallace saw it, television adopted a self-deprecating, ironic attitude to make viewers feel smarter than the naïve public, and to flatter them into continued watching. Fiction responded by simply absorbing pop culture to “help create a mood of irony and irreverence, to make us uneasy and so ‘comment’ on the vapidity of U.S. culture, and most important, these days, to be just plain realistic.” But what if irony leads to a sinkhole of relativism and disavowal? For Wallace, regurgitating ironic pop culture is a dead end:
Anyone with the heretical gall to ask an ironist what he actually stands for ends up looking like an hysteric or a prig. And herein lies the oppressiveness of institutionalized irony, the too-successful rebel: the ability to interdict the question without attending to its subject is, when exercised, tyranny. It [uses] the very tool that exposed its enemy to insulate itself.
So where have we gone from irony? Irony is now fashionable and a widely embraced default setting for social interaction, writing and the visual arts. Irony fosters an affected nihilistic attitude that is no more edgy than a syndicated episode of “Seinfeld.” Today, pop characters directly address the television-watching audience with a wink and nudge. (Shows like “30 Rock” deliver a kind of meta-television-irony irony; the protagonist is a writer for a show that satirizes television, and the character is played by a woman who actually used to write for a show that satirizes television. Each scene comes with an all-inclusive tongue-in-cheek.) And, of course, reality television as a concept is irony incarnate.
Repost from http://melissamaynase.tumblr.com/post/80864152800/vasundharaa-this-is-a-resource-post-for-all-the
This is a resource post for all the Good White Person™s out there. You know, the ones who say things like “It’s not my fault I’m white! Don’t generalize white people!”, or “I’m appreciating your culture! You should be proud!”, or “Why do you hate all white people, look I’m a special snowflake who’s not racist give me an award for meeting the minimum requirements for being a decent human being”.
Well, if you are actually interested in understanding racism and how it ties into cultural appropriation, please read instead of endlessly badgering PoCs on tumblr with your cliched, unoriginal arguments and repeating the same questions over and over.
Original source.
IPCC: world is ill-prepared for risks from a changing climate
by Liz Kalaugher
The world, in many cases, is ill-prepared for risks from a changing climate. So says the Intergovernmental Panel on Climate Change (IPCC), which today released its working group II (WGII) report, Climate Change 2014: Impacts, Adaptation, and Vulnerability.
"We live in an era of man-made climate change," said Vicente Barros, co-chair of working group II. "In many cases, we are not prepared for the climate-related risks that we already face. Investments in better preparation can pay dividends both for the present and for the future."
Although there are opportunities to respond, the risks will be difficult to manage with high levels of warming, according to the report. In that case, says WGII co-chair Chris Field, "even serious, sustained investments in adaptation will face limits".
What's more, the summary for policymakers says that "the precise levels of climate change sufficient to trigger tipping points remain uncertain, but the risk associated with crossing multiple tipping points in the Earth system or in interlinked human and natural systems increases with rising temperature".
The report details climate change impacts so far, such as changes in the quantity and quality of water resources, shifts in the range of animal and plant species, and altered crop yields, as well as the adaptation measures adopted to date.
"Climate-change adaptation is not an exotic agenda that has never been tried," said Chris Field, co-chair of working group II. "Governments, firms and communities around the world are building experience with adaptation. This experience forms a starting point for bolder, more ambitious adaptations that will be important as climate and society continue to change."
Facing the risk
For the first time the WGII report includes a focus on risk, which it says supports decision-making in the context of climate change. "People and societies may perceive or rank risks and potential benefits differently, given diverse values and goals," states its summary for policymakers.
Many of the key risks are particular challenges for the least developed countries and vulnerable communities, given their limited ability to cope, concludes the document.
"Understanding that climate change is a challenge in managing risk opens a wide range of opportunities for integrating adaptation with economic and social development, and with initiatives to limit future warming," said Field. "We definitely face challenges, but understanding those challenges and tackling them creatively can make climate-change adaptation an important way to help build a more vibrant world in the near-term and beyond."
The economic impacts of climate change are hard to pin down. For an additional warming of 2°C, global annual economic losses have been estimated to be from 0.2 to 2% of income, according to the report, but are more likely than not to be larger than this range. And individual countries would see big differences in the losses sustained. The incremental economic impact of emitting carbon dioxide may lie between a few dollars and several hundred dollars per tonne of carbon, depending on the amount of damage and discount rate assumed.
Regional variation
WGII has defined key risks, along with potential adaptation measures, for each region. For Africa these are stress on water resources, reduced crop productivity and climate-related changes in vector- and water-borne diseases. Europe, meanwhile, could see increased flooding in river basins and coasts, less water availability and more extreme heat events. In Asia the key risks are likely to be more riverine, coastal and urban flooding, greater risk of heat-related mortality, and more drought-related water and food shortage. Australasia could see problems for coral reefs, more frequent and intense flood damage, and increasing risks to coastal infrastructure and low-lying ecosystems.
North America is likely to suffer increased problems with wildfires, heat-related human mortality, and urban floods in riverine and coastal areas. Central and South America could see increased risks to human health, problems with water availability in some regions, flooding and landslides due to extreme precipitation in others, and decreased food production and quality. At the poles there could be problems for ecosystems, risks for the health and wellbeing of Arctic residents, and challenges for northern communities. Small islands are likely to see threats to low-lying coastal areas as well as loss of livelihoods, infrastructure, ecosystem services and economic stability. And the oceans could well experience a shift in fish and invertebrate species, reduced biodiversity, lower fisheries abundance, less coastal protection by coral reefs, and coastal inundation and habitat loss.
More publications
This fifth assessment WGII report relies on more published literature than its fourth assessment predecessor, which was released in 2007. The number of scientific publications covering climate-change impacts, adaptation and vulnerability more than doubled from 2005 to 2010, with an especially rapid rise in papers on adaptation, according to WGII.
A total of 309 coordinating lead authors, lead authors and review editors, from 70 countries, put together this fifth assessment report, with help from 436 contributing authors and 1729 expert and government reviewers.
• The IPCC's working group III report, on climate mitigation, is due to be released on 13 April.
Despite seeing all these different ways in which the genetic picture is far more complex than we initially thought, and far more complex than the genetic picture on which the traditional evolutionary models rely, if you sift through the scientific literature, you will find plenty of findings of the heritability of such and such trait – that some trait is heritable to some specified degree.
Now, the importance of heritability to this particular discussion has to do with the fact that the traditional evolutionary models heavily depend on the assumption that traits are entirely heritable because traits are entirely a result of genes. (Once again, the traditional models teach us that anything even remotely close to Lamarckian evolution is flat out wrong.) So, one would think that all the scientific findings that purport to show the heritability of such and such trait would give support and favour to that assumption.
The problem is, the words 'heritable' and 'heritability' don't mean what you think they mean. And they don't mean what the traditional evolutionary models would need them to mean. At the end of this entry, I go into more detail regarding the theoretical consequences of what I discuss here on the traditional evolutionary models. My apologies that the perhaps most important point of this entire entry is all the way at the very end, but there is no way to fully make the point, to allow my dear reader to fully understand it, without first going through a few things.
So the notion of heritability requires some unpacking here, because the term is part of the technical vocabulary of the science of genetics, which means that when scientists use the term, they have a very specific meaning in mind, and it's not the layperson's meaning. The technical meaning of 'heritability' is intimately tied up with scientific practice and with the theory of scientific practice. To really understand what scientists mean when they use the term, we have to understand how they go about their experiments and research when they are attempting to measure the heritability of some trait.
I say that the meaning is tied up, not just with scientific practice, but with theory of scientific practice. What I mean by that has to do with the fact that, at least in theory, ideally, scientists are being extremely careful
about what they can and can't measure, and sometimes the difference is extremely subtle for those of us who aren't scientists. We throw around the terms 'genetic' and 'heritable' interchangeably and very loosely. But what do we think we really mean by them? And more importantly, is what we think we mean even scientifically credible, in the sense of being something that scientists could even demonstrate at all?
Obviously, then, the crucial point here is to understand what heritability is actually a measurement of, since that is what is actually meant when scientists talk about the heritability of some trait, and thus that is what scientists claim to have evidence for. Furthermore, a scientist never means what the layperson means, or what the layperson thinks he means, or what the layperson wants the scientist to mean.
What heritability is not a measurement of is how much genes determine the average of a given trait in a population – for example, the average number of fingers on each hand, or the average distance between the eyes, or the average height, or the average level of intelligence, or the average level of aggression, and so on.
Instead, what heritability is a measurement of is how much genes have to do with the degree of variability of a trait.
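To illustrate what that means, here is a toy sketch of my own (not from any actual study): treat each individual's trait value as a genetic contribution plus an environmental contribution, and compute (broad-sense) heritability as the fraction of the trait's population variance attributable to genetic variance, h² = Var(G) / Var(P).

```python
import random
import statistics

random.seed(1)

# Toy population: trait value = genetic contribution + environmental
# contribution, with the two drawn independently.
N = 10_000
genetic = [random.gauss(0.0, 2.0) for _ in range(N)]      # Var(G) ≈ 4
environment = [random.gauss(0.0, 1.0) for _ in range(N)]  # Var(E) ≈ 1
trait = [g + e for g, e in zip(genetic, environment)]     # Var(P) ≈ 5

var_g = statistics.pvariance(genetic)
var_p = statistics.pvariance(trait)
h2 = var_g / var_p  # heritability: share of trait VARIANCE due to genes

print(f"h^2 ≈ {h2:.2f}")  # roughly 4/5 = 0.8 in this toy setup
```

Note what this number is not: it says nothing about how much genes determine the trait's value. A trait can be completely specified by genes and still have near-zero heritability – if virtually all the variation in, say, finger number comes from accidents (environment), then Var(G) for that trait is close to zero, and so is h².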
Finally got around to watching the documentary The Fog Of War, an extended interview with McNamara. Incredibly interesting. But I want only to note here right now that Morris ends the documentary with a phone(?) conversation in which he asks McNamara a few further questions about Vietnam, which McNamara declines to discuss any further, for reasons he can't really disclose, because of things he knows that we don't and because of how what he might say would be taken by particular but unnamed others. But it's his very last sentence that is so deeply telling. And in a most troubling way. Morris asks whether it's a problem of, "Damned if you do, and damned if you don't," and McNamara says that's exactly what it is. And, "I'd rather be damned if I don't."
This is just a repost. Original Source
The cost of stereotypes
By Margaret Harris
When a 2012 study
showed that scientists subconsciously favour male students over females when assessing their employability as early-career researchers, it generated plenty of debate
– not least among women, who were, according to the study, just as likely to be biased as the men were.
Some of these discussions got rather overheated, but one cogent criticism of the study did emerge. Roughly, it was this: might the scientists’ preference for men over equally well-qualified women be a rational response to the fact that, because of various barriers, women in science often need to be better than their male counterparts in order to have an equal chance of success?
The question was an awkward one, since it implied that women in science could be caught in a vicious circle, with the negative effects of bias in the workplace making it “rational” to be biased in hiring (and, in turn, making such workplace bias more likely to persist). However, a new study appears to rule out this argument by finding similar patterns of hiring bias against women even when the “job” is an arithmetical task that, on average, women and men perform equally well.
In their study
, Ernesto Reuben, Paola Sapienza and Luigi Zingales examined the effects of gender stereotypes in an artificial market where male and female “employers” were presented with male–female pairs of “candidates” and asked who they would hire to complete an arithmetical task. When employers had no information about the candidates other than their appearance, they chose the man 66% of the time (out of 507 male–female pairs). When, in a second experiment, employers also heard the candidates’ self-reported performance on a previous, similar arithmetical task, they still picked the man 66% of the time – even though in around half of the 160 male–female pairs, the woman had outscored the man. When the researchers themselves informed employers about candidates’ past performance, the bias was smaller, but employers still hired the man 57% of the time (out of 265 pairs).
The researchers also showed that in some circumstances, employers’ biased hiring decisions correlated with their pre-existing negative views of women and mathematics, as measured by an Implicit Association Test
(IAT). Male and female employers who held stereotypically negative views about women’s mathematical abilities (IAT scores > 0) were more likely to predict that male candidates would outperform females on the second task if they were given (a) no information or (b) only self-reported information about the candidates’ past performance.
This correlation vanished when past-performance information came from the researchers themselves, suggesting that “stereotypes did not seem to affect [employers’ decisions] when the information was provided by a neutral third party”. However, even with good, neutral information, employers still chose female candidates less often than male ones: in one sub-experiment involving 265 hiring decisions, employers chose a lower-performing male over a higher-performing female a whopping 82.7% of the time.
Based on these results, the researchers concluded that “stereotypes do indeed affect the demand for women in mathematics-related tasks, regardless of quality considerations”.
Picking up where I left off:
The traditional evolutionary model of heritable traits in the context of genetic determinism disallows any form of non-genetic (non-Mendelian) inheritance of a trait. More specifically, it disallows the heritability of epigenetic changes; or in other words, it disallows epigenetic inheritance.
So, this issue actually came up in that conversation I'd had last year that sparked me to think about all of this and learn some things about it. And what that person said to me was that we have no evidence whatsoever that epigenetic changes can be passed on, i.e., that epigenetic changes are heritable.
That is completely false.
In fact, we've known for some time now that epigenetic changes can be passed on.
(Note: that doesn't mean all epigenetic changes are passed on. At least some of them can be passed on. No one would ever claim that all of them can be.)
What this does, of course, is complicate the evolutionary picture a whole lot more. Because we now have lots of different things going on simultaneously that will, in a variety of different ways, affect the evolutionary picture that comes out at the end.
An intertwining theme that will come up over and over again is just how much environment can affect what goes on.
And then there is the crazy world of something that isn't supposed to be possible: an organism changing its own DNA.
Some examples of heritable epigenetic changes:
One significant example comes from the Dutch Hongerwinter ("Hunger winter").
In the winter of 1944-45, Holland suffered severe starvation for about three months because the Nazis, who had control of Holland at that time, sent all of the food in Holland to Germany, both because the Germans needed food and because they wanted to punish the Dutch. Scientists found that in those individuals who were third trimester fetuses at the time of the Dutch Hongerwinter – those who survived, obviously – there were much higher than average rates of obesity, diabetes, and cardiovascular problems. Such was not seen in those who were newborns during the time, nor in those who were first trimester fetuses during the time. This basically led to the discovery that during the latter part of the pregnancy, the fetus is, in a sense, programming its metabolism based on the information it is receiving via nutrients in the mother's blood at that time, and its metabolism remains that way for the rest of the person's life. Additionally, as expected, those who were third trimester fetuses during the Hongerwinter were smaller than average when born.
Furthermore! The children of those individuals show similar features – smaller than average at birth, higher rates of obesity, diabetes, etc. – demonstrating that these epigenetic effects have been passed on.
As I've mentioned, there are lab rat lines that have been bred for certain traits. Again, if they've been bred for these traits, then we can infer that these traits must be genetic.
There is a line of rats that have been bred for high anxiety, high stress; overall, their brains tend to be slightly smaller than average, but particularly in a certain brain region that has to do with turning off the stress response. So, baseline stress levels of those rats are much higher than average, and they are exposed to more stress overall because their brains are less effective at turning off the stress response and recovering from stress.
One scientist did an amazing experiment with these rats. She figured out how to perform surgery to transfer fetuses from one rat to another rat and have the fetuses develop normally. Then, she transferred fetuses from high stress rats to low stress rats, so that fetuses from the high stress rat line developed in low stress rats. The result was that those fetuses, after being born, grew up to be low stress, not high stress. In other words, she discovered that the trait of being high stress was not
a genetic trait at all, but an epigenetic trait
for which the epigenetic change occurs during prenatal development! Specifically, the epigenetic change in the fetus is caused by exposure to glucocorticoids (stress hormones) from the mother.
Of the three assumptions that traditional Darwinian evolutionary models rely on, I've only addressed two: adaptationism (the adaptationist fallacy) and gradualism. The other assumption is that traits, behaviors included, are heritable.
For the traditional evolutionary models, heritability is entirely about genes
, about DNA. These models are explicitly against anything like Lamarckian evolution. Lamarck argued that evolutionary change is a product of things like the efforts of organisms, or experiences of organisms in general. To give an extremely simplistic example: the giraffe got its long neck through generations of giraffes stretching their necks to reach for tree branches to eat from, each generation or so ending up with a slightly longer neck than the previous one. Of course, everyone knows Lamarck was wrong and evolution doesn't happen that way.
The traditional evolutionary models offer the explanation that some behavioral trait was selected for because it was somehow adaptive. So for example, in tournament primate species, it's not uncommon to see adult males killing infants, and since this is a widespread behavior, the traditional models have to claim that that behavior exists in all of those species because it was selected for because it was adaptive in some way. Since the males will only kill infants who are the offspring of another male, who is not a relative, the explanation is, of course, individual selection and kin selection: since they all must be driven to pass on as many copies of their genes as possible, and
to try to pass on more
copies than the next guy, unless he's a close relative, because there is competition everywhere in everything, killing someone else's kids decreases the number of copies of genes that other guy passes on, giving you the opportunity to pass on your genes in place of his. Additionally, a male won't kill some other guy's kids if it so happens that that other male is a close relative. And that behavioral trait, too, the traditional model must claim has been selected for. Obviously.
But think about that very carefully: if you make that claim, then you are assuming
that a behavior like killing some other guy's kids is heritable
, and thus is genetic
. And again, that a behavior like sparing the kids of some other guy if he is a close relative is a heritable trait
, and thus, is genetic
. That is a huge assumption. Especially when you look at how genes and DNA actually work: these are very complex behaviors we are talking about here, so the route from genes to such behaviors would also have to be fairly complex and composed of several steps.
But there is a problem here. The practice of killing infants by male primates of tournament species is not a simple given: it only occurs in certain types of circumstances, and those circumstances are at least as sociological as they are biological. So now, if you insist this is a genetically determined trait, because it's a behavior that has been evolutionarily selected for, and thus is heritable, you have to find some way of making genetic sense of the differences between circumstances in which a male kills an infant and those in which he doesn't. And one of the several factors all of this depends on is the hierarchical rank of the male. Good luck.
Or consider that the usage of tools has been observed in many primate species, with variability in the sophistication of the tools themselves and the method of use. For example: using rocks as hammers, to break open various food sources, such as nuts and oysters. Such a behavior is so widespread, the traditional model would have to claim it is an adaptive trait that was selected for. But since only heritable traits can be selected for, the traditional model has to say that using rocks for breaking things open is a heritable trait, and therefore must be a genetic trait. But does that really make sense? I mean, apply that to humans: the use of bowls to hold food is an extremely widespread behavior amongst humans, so it must have been selected for, which must make it a genetic trait.
Really? Suddenly that starts to sound completely crazy. Or incredibly arbitrary. And you obviously can't attempt to get out of it by trying to claim that humans are somehow different from all other animals so that we're somehow exempt from the same rules of evolutionary analysis, because if there is anything that evolutionary theories since Darwin show us, and insist upon, it is that we are not
different and that we are just as much a part of the animal kingdom as is a moose or a mouse.
You might just think: but what's wrong with the idea that these traits are passed down from one generation to the next by teaching / learning? In other words, the idea that such traits are in some sense cultural.
The problem is that the traditional evolutionary models can't really make sense of that. If a behavior is strictly
a product of learning and thus, not inherited, it cannot be understood as being a product of evolution, which means it cannot be understood as being evolutionarily advantageous. You can see how limited the traditional models are here: if it's not genetic, then it cannot really be a product of evolution.
Why? Why does the traditional model require a trait to be genetic in order for it to be evolutionarily passed on? A few different reasons. Because evolution is all about genes: natural selection, remember, isn't about survival of the fittest, but passing on more copies of one's genes. It's what's in the genes that's important. Furthermore, because the traditional models have evolution moving so slowly – gradualism – what is needed for a trait to eventually spread throughout a population is some way for it "to stick" so that it is not lost at any point. Because, as we all know too well, it is incredibly easy for things learned to be lost in the following generations; that fact makes it nearly impossible to imagine strictly learned traits surviving long enough to gain the title of being evolutionarily advantageous. Having the traits written into the genetic material itself is basically the only way the traditional models have for explaining how traits are inherited and spread through populations, or how they disappear if they are disadvantageous, which is equally important to the traditional evolutionary models. For again, how can something really be selected against
in the way that the traditional models mean that if the trait is not tied to the genes?
Another reason the traditional models require a trait to be genetic in order to be evolutionarily passed on is that the alternative creeps over into Lamarckian evolution.
Here is a wonderfully interesting example to think about.
You know, I'm getting really fucking sick and tired of losing so much of my life to being sick and tired because in one way or another my body is the enemy I have to fight with every single day. If it's not one thing, it's another, and my body simply refuses to respond to any medications for whatever is ailing me on any given day.
What's the point in living if 99% of the time you feel like shit and can't get a damn thing done?
I should mention: this is not a joke: it is an actual recruitment video for the Gabon military.
Very worth listening to this talk. This guy has some very interesting things to say, and has a very interesting perspective on health and being human.
- Tags:adhd, aspergers, biology, depression, health, human relations, mental disorders, mental illness, neuroscience, pain, phil of biology, stressed
- Music:Pink Floyd
The best evidence / demonstration of the presence of male bias in the sciences because of their being, until only very recently, male dominated:
The suspicious lack of studies of homosexual women amongst all the studies of homosexuality in humans.
Apparently the scientific interest in homosexuality in humans = an interest in homosexual men. (With a rare few exceptions to that '='.)
(Apologies for taking a while to get these posted. I've lately been feeling not so good, interspersed with obsessively working on knitting a sweater that I've been trying to finish, but keep taking apart to change this or that.)
One thing I want to point out and emphasize – something I've already mentioned, but want to say again – is that behaviors
are just as much traits as physiological characteristics are, and the traditional evolutionary models depend on the idea that behaviors are shaped in exactly the same way as any physiological characteristic is, and thus, that behaviors are connected to genes in exactly the same way as any physiological characteristic is. So, for example, the explanation as to why male baboons fight with each other all the time is the same explanation as to why their olfactory sensory system is set up the way it is, and why they have canines the size that they do, and why their overall bone structure is what it is: on the average, it increases how many copies of genes they get to pass on. (Of course, then there's that problem I discussed previously about females actually choosing
to mate with the less aggressive males, and thus explicitly tricking the more aggressive, higher-ranked males in order to mate with those less aggressive, lower-ranked males that are actually nice to the females.)
With that in mind, here is a fascinating finding to think about:
Amongst voles, some types are monogamous and other types are polygamous. What has been found is that a promoter upstream of a gene having to do with the hormone vasopressin – a gene for a receptor – comes in two different versions, and the monogamous male voles have one version, while the polygamous male voles have the other version. The gene itself, however, is the same in both. A difference in the promoter means differences in the context in which the gene is expressed, such as what transcription factor affects the gene, where in the body (in the brain) the gene is expressed, and so on. So a mere difference in the promoter of a gene, but not a difference in the gene itself, gives rise to a radical difference in behaviors. For a species to be monogamous means that that species is a pair-bonding species; and for one to be polygamous is for that species to be a tournament species. And as I've already discussed, there are very significant differences between pair-bonding and tournament species. So, a single difference in a regulatory factor for one gene can play a critical role in a very big set of differences at the macro level.
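To make the promoter point concrete, here's a schematic sketch in code – think of the gene as a function and the promoter as configuration deciding where, and how strongly, its product shows up. This is purely my own illustration: the region names and numbers are made up, not real vole data.

```python
# Schematic sketch of the promoter point above: the gene (the protein it
# codes for) is identical in both vole types; only the regulatory
# "header" differs, and that alone changes where the receptor ends up.
# Region names and levels are illustrative assumptions, not data.

def make_receptor():
    """The gene itself: identical in monogamous and polygamous voles."""
    return "vasopressin receptor"

# Two versions of the promoter, i.e. of the expression *context*:
monogamous_promoter = {"regions": ["bonding circuit"], "level": 10}
polygamous_promoter = {"regions": ["elsewhere"], "level": 2}

def express(gene_product, promoter):
    """Same protein, but the promoter decides where and how much."""
    return {region: [gene_product] * promoter["level"]
            for region in promoter["regions"]}

mono = express(make_receptor(), monogamous_promoter)
poly = express(make_receptor(), polygamous_promoter)

# The behavioral difference tracks expression in the bonding-relevant
# region, not any difference in the protein itself.
print("bonding circuit" in mono)   # True
print("bonding circuit" in poly)   # False
```

The point of the sketch: nothing about `make_receptor` changes between the two animals; the macro-level difference comes entirely from the configuration around it.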
However! Things quickly get complicated and murky once we look at what is going on in one species in particular. Normally in pair-bonding species, in order for the pair-bond to occur, during and immediately after mating the right things have to be going on with certain hormones and neurotransmitters, levels of some going up at the right times while the levels of others decrease and so on. That difference in the promoter for the vasopressin receptor gene is playing a role here. As you would expect, the specifics of what goes on are different in tournament species. This is a consistent difference. And then we look at bonobos. In bonobos we find exactly the same things going on with hormones and neurotransmitters that go on in pair-bonding (i.e., monogamous) species. But bonobos are about as far from monogamous as any species on the planet can possibly get. And yet they are also definitely not a tournament species, given that males and females do not differ in body size, that bonobos are matriarchal, and so on.
What should one take from this? First, the connection between behavioral traits and genes is more complex, since it's not always about the gene itself, but can have something to do with the context
in which the gene is expressed. But second, things still aren't even that clean, since we find anomalous cases.
Recall the different lab rat lines that have been bred for certain features. At least one of the most aggressive lines was found to actually differ in pain sensitivity, so that they were much more sensitive to pain than other rats. Being a lot more aggressive is, if you think about it, a pretty significant difference in behavior, since it's not just about a single behavior or a type of behavior, but about all
behavior. If the difference has to do with how sensitive one is to pain, then that is a difference that can be brought about by mutation in a single gene, or in some regulatory aspect for that gene: mutate the gene for the µ1-opioid receptor, which is the favoured receptor of β-endorphin, making it non-functional, and that individual has lost the body's best natural pain-killing mechanism, and will thus end up far more sensitive to pain.
So again, much like the examples I gave in the previous entry, it is extremely easy to get significant macro-level changes by mere micro-mutations in DNA.
Now, continuing on as to how it is that the gradualism of the traditional evolutionary models falls apart:
There is a second way in which duplications of genes make gradualism implausible: they can actually allow for much faster evolution
. An extra copy of a gene for a receptor of a hormone, for example, might get mutated enough to code for a receptor with a different shape, but a shape into which nothing fits. Since there's another copy still coding for the right receptor, the mutation has no effect. Then, ten thousand years later, some new mutation codes for a protein that just happens to have the right shape to fit into and bind with that receptor. And suddenly a dramatic change results. In other words, in a metaphorical sense, extra copies of genes and regulatory factors allow the genome to experiment with different mutations without having to change whatever roles those genes and regulatory factors have, without having to lose whatever it is they do; then eventually, some of those "experimental" mutations complement each other, and suddenly there are significant macro changes. What this would mean is that the "in between stages" we envision occurring in traits, i.e., at the macro level, in a gradualist picture of evolution wouldn't actually be occurring; instead, those "in between stages" would be occurring at the genetic level only, because the mutated genes and regulatory factors are ones that the body doesn't have any way of expressing yet, and since the mutations are occurring in duplicated parts of DNA, nothing is lost, and thus, nothing at the macro-level changes. And then the right mutation occurs that can then "turn on" all those mutations.
This is not simply a theoretical possibility for duplications. Scientists studying the evolution of specific genes, or specific proteins in the body, have determined that some particular gene initially started out as a duplicate copy of another gene that is still present in the genome and still functional, and now that mutated duplicate codes for a functionally different protein.
On the one hand, we could say this is still gradualism, since the gradual changes are occurring at the genetic level. But that sort of gradualism is an entirely different kind of gradualism from that of the traditional evolutionary models, because, most importantly, it could not
save the competition that permeates the traditional models, the competition that is supposed to be driving a lot of the evolution going on. Since such gradual genetic mutations would be having no macro-effects, they could not increase or decrease an individual organism's chances for passing on copies of its genes; in that sense, such mutations would be entirely neutral with respect to evolutionary "advantage" or "disadvantage". But for the traditional models, it is crucial that the "in between stages" are specifically incrementally evolutionarily "advantageous".
Consequently, this function of duplicated genes also explains how traits can arise without being adaptive.
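Here's a little toy simulation of that "experimenting on a spare copy" idea. It's entirely schematic – the sequence, the mutation step, and the all-or-nothing notion of "functional" are my own simplifications, not real genetics:

```python
# Toy sketch of the duplication argument: once a gene is duplicated,
# mutations in the spare copy are invisible to selection as long as the
# original still works, so the copy can drift freely, accumulating
# changes with no macro-level effect. Purely illustrative.

import random

random.seed(1)
ORIGINAL = "GATTACA"            # a stand-in "working" gene sequence

def mutate(gene):
    """Randomly replace one position: a point mutation."""
    pos = random.randrange(len(gene))
    return gene[:pos] + random.choice("ACGT") + gene[pos + 1:]

def is_functional(gene):
    """Crude stand-in: the organism is fine as long as one exact
    working copy exists."""
    return gene == ORIGINAL

# The genome carries the original plus a duplicate.
genome = {"original": ORIGINAL, "duplicate": ORIGINAL}

for _ in range(1000):
    # Only the duplicate mutates here, standing in for the fact that a
    # broken original would be selected against, while a broken
    # duplicate would not be.
    genome["duplicate"] = mutate(genome["duplicate"])

print(is_functional(genome["original"]))   # True: nothing was lost
print(is_functional(genome["duplicate"]))  # almost certainly False: it drifted
print(genome["duplicate"])                 # diverged sequence, raw material
```

The drifted duplicate is exactly the "experimental" material described above: invisible at the macro level until some later mutation gives the body a way to express and use it.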
Picking up where I left off: how do the traditional evolutionary models fall apart in the face of our current understanding of the molecular-genetic world?
The bottom line is this:
The traditional evolutionary models all rely on a few crucial assumptions, and one of those assumptions is that of gradualism
, that is, the evolutionary process occurs very gradually, very slowly, through tiny bits of random changes in this or that trait amongst members of a species. However, gradualism is inconsistent with what is actually going on in the molecular-genetic world. Part of understanding this is, once again, understanding why it's not right to think of genes as coding for traits, because what genes really code for are proteins
, which have all sorts of functions in the body. The rest of understanding how gradualism is inconsistent with molecular genetics is understanding the details of what is happening at the molecular level, which I went into in the previous entry, and understanding how those details fit together as a whole picture of what is going on in the molecular genetic world, and what the molecular and biological consequences are.
Of course, it's important to first consider what would have to be going on for gradualism to be true, so that we can then see how that's not what is happening. So if gradualism is true, then what we should expect to be happening over the course of evolution are micro-mutations
in the DNA bringing about very slight macro-changes in some trait. Then, if that slight change gives the organism even the tiniest bit of advantage over other members of its species in passing on copies of its genes, then, given a long enough stretch of time, that slight macro-change will have been selected for and eventually all the members of the species will have inherited that micro-mutation in the DNA. By that point, it would no longer be considered a mutation
, but just a "normal" part of the genome of that species.
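Just to make that gradualist expectation concrete, here's a tiny toy calculation. It's a textbook-style haploid selection model of my own choosing – the numbers are illustrative, not from any real species:

```python
# A minimal deterministic sketch of the gradualist expectation above:
# a variant with even a tiny fitness edge eventually spreads through
# the whole population, it just takes a long time. Illustrative only.

def generations_to_spread(advantage, start=0.001, target=0.99):
    """Return how many generations a variant with relative fitness
    (1 + advantage) needs to go from frequency `start` to `target`."""
    p, gens = start, 0
    while p < target:
        w_bar = p * (1 + advantage) + (1 - p)      # mean fitness
        p = p * (1 + advantage) / w_bar            # next-generation frequency
        gens += 1
    return gens

# Even a 1% advantage eventually takes over, just slowly:
print(generations_to_spread(0.01))   # on the order of a thousand generations
print(generations_to_spread(0.10))   # a 10% advantage is far faster
```

That's the whole gradualist picture in miniature: tiny edge, long time, eventual fixation – which is exactly the picture the molecular details below undermine.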
Explaining how that's not
what is actually going on requires going through some explanatory steps and going into a little more detail about some of the specifics. So bear with me, as there are a lot of pieces to the picture. (As I've already said, things are a whole lot more complicated than you think.)
A micro-mutation is understood here to be a point mutation
, of which there are three basic types: replacement, insertion, and deletion. A replacement mutation
is when a single nucleotide is changed to a different nucleotide, having been miscopied or altered by radiation, etc. An insertion mutation
is when a single nucleotide is accidentally repeated when the DNA is copied, and thus shifts the reading frame of everything downstream in one direction. A deletion mutation
is when a single nucleotide is accidentally left out when the DNA is copied, and thus shifts the reading frame of everything downstream in the other direction.
A replacement may have no effect if the resulting codon it is part of codes for the same amino acid as the un-mutated version. Or, it may have only a slight effect if the resulting codon codes for an amino acid that has very similar molecular properties to the amino acid coded for in the un-mutated version. Here is a linguistic example to help explain and demonstrate this – I didn't personally come up with this example, I'm stealing it from someone else.
(1) 'I will now do this.'
(2) 'I will mow do this.'
We wouldn't have any difficulty understanding what that second sentence is supposed to mean. So in this case, the replacement doesn't really have an effect, because the original message is still readable despite the mutation.
However, sometimes a replacement can have drastic effects, if the resulting codon codes for a different amino acid with entirely different properties from the amino acid coded for in the un-mutated version.
(1) 'I will now do this.'
(3) 'I will not do this.'
Obviously, the third sentence means something entirely different from the first; so, in this replacement mutation, the message has drastically changed, with potentially devastating effects.
(1) 'I will now do this.'
(4) 'I will noo wd othi.'
Since codons have three, and only three, nucleotides, when one nucleotide gets accidentally repeated, it causes what is called a frameshift, which is what is represented as going on in (4). This insertion mutation has now turned the sentence into gibberish.
(1) 'I will now do this.'
(5) 'I will nod ot his.'
Once again, because codons must have three nucleotides, a deletion mutation also causes a frameshift, in the other direction, which again has here turned the sentence into gibberish.
So, point mutations can have no effect, thanks to the redundancy in the genetic coding of amino acids, or a very slight effect that doesn't really cause any macro-changes – that is, macro-changes that affect the organism's survival – or a significant and possibly devastating effect.
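If it helps, here's the same word-game written out as a little toy program. It's purely a string toy – the function names and the fixed three-letter "codons" are my own illustration, not real bioinformatics code:

```python
# Toy model of the three point-mutation types, mirroring the sentence
# example above. The "genome" is a string read in fixed-width codons
# of three letters, the way DNA is read three nucleotides at a time.

def read_codons(seq, width=3):
    """Split a sequence into fixed-width codons."""
    return [seq[i:i + width] for i in range(0, len(seq), width)]

def replace(seq, pos, char):
    """Replacement: one character changes; the reading frame is intact,
    so only the one codon containing it is affected."""
    return seq[:pos] + char + seq[pos + 1:]

def insert(seq, pos, char):
    """Insertion: one extra character shifts everything after it,
    altering every downstream codon (a frameshift)."""
    return seq[:pos] + char + seq[pos:]

def delete(seq, pos):
    """Deletion: one missing character shifts the frame the other way."""
    return seq[:pos] + seq[pos + 1:]

original = "IWILLNOWDOTHIS"                       # read as IWI LLN OWD OTH IS
print(read_codons(original))
print(read_codons(replace(original, 5, "M")))     # one codon changes
print(read_codons(insert(original, 5, "X")))      # frameshift: downstream shifts
print(read_codons(delete(original, 5)))           # frameshift the other way
```

Running it shows the asymmetry directly: the replacement touches a single codon, while the insertion and deletion scramble every codon downstream of the mutation.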
The question one might be wondering at this point is, if the traditional models of evolution fail to describe and explain what has actually been going on, then, what's the alternative?
The way of getting at some kind of answer to that is by looking in the one place that we think carries
the effects of evolution: genes.
Our theory of the nature of DNA and genes and how they function has changed drastically from what it was 40 years ago. DNA and genes have turned out to be very different from what we initially expected. At this moment, there is still a whole lot
that we don't know and don't understand, so the field of genetics is not without some significant debates and disagreements between scientists. That has to underlie any discussion about these topics.
That said, there are a lot of things scientists are
in agreement about. There are also a lot of findings and a lot of data; and while those are, in and of themselves, fairly clear, how we ought to interpret
them is sometimes clear and sometimes not.
Conversationally, as non-scientists, we often talk about genes for
this or that trait, such as having genes for brown or blue eyes, for being tall or short, having brown or blond hair, for being muscular or thin, whatever, just to name a few traits relevant to humans. The problem is, that's incorrect. That sort of understanding of genes comes from the beginnings of Darwinian evolutionary theory and Mendelian genetics.
For Darwinian evolution, for example:
You might be looking at some species of bird and notice that, between two individuals (of the same sex), one has a slightly larger wingspan than the other, despite that the rest of their bodies are fairly equal. You might then conclude that, perhaps, the one with the slightly larger wingspan has some slight mutation in the gene that has to do with their wingspan, and that's probably advantageous, so, if you came back many, many, many generations later, then most or all members of the species would have that slightly larger wingspan. On the other hand, if it was disadvantageous – maybe it's too clumsy – then, many generations later, none of the members of that species would have that large of a wingspan. And, that's how evolution via genes works.
This is the way we were all taught to think of genes in middle or high school, which means that, for most people, it's hard to not
think of genes in that way.
Genes don't code for traits. Period.
What genes do
code for are proteins
. In other words, genes code for microscopic molecules. So, it's in terms of microscopic molecules that you have to think about all of this. Now, I don't know about you, but I'll be honest: it's easy to say
that, but to actually think
in those terms is not at all easy. It's like the difference between having to translate a foreign language in order to read something or have a conversation, and being able to think in that foreign language
so that no translation whatsoever is occurring in your mind.
Why am I blathering on about this? Because as long as you slip even the slightest bit back into that incorrect picture of genes that we were all taught as kids, you put yourself in danger of misunderstanding, and it's a kind of misunderstanding that can easily spread, so that one little misunderstanding at some point has the potential for a sort of ripple effect. It's not that all of this is actually difficult to understand, because it's not. It's more a matter of breaking a habit.
An interesting criticism of the traditional evolutionary models came from Soviet scientists. Their criticism was directed at the claim that the driving force behind evolution was competition between individuals
, individual selection, just trying to pass on more copies of your own genes than the next guy (who's not closely related to you). What the Soviets emphasized as playing a critical role in evolution is climate
, and being able to survive extreme climate conditions. But what they observed was that, in such conditions, you didn't find the competition that these Western white upper class male scientists were harping on about. According to their traditional models, what was supposed to be the case in extreme conditions was that, being under greater pressures, competition between individuals, individual selection, should increase. Scarcity of resources, for example, we've all been taught is supposed to cause increased competition. The Soviet scientists observed that the opposite is actually true.
And in fact, further observations confirm this. In times of decreased food supply, not decreased so much that animals are being calorically deprived, but just during times when they have to work a lot harder to get food, the traditional models predict that competitive and aggressive behavior should increase. What in fact has been observed is that during those times, aggressive behavior decreases
! And this is something that has been observed in lions! Furthermore, in times of excess, when food sources are abundant and animals don't have to work much to get their food, aggressive behavior increases! Again, that's the complete opposite of what the traditional models would have you believe. This increase in aggressive behavior during times of food excess is called 'behavioral fat'.
This reminds me of something I observed in my ducks on Waldron, that I'm pretty sure I wrote about here. When all of the females were spending their time sitting on their nests, this left the males on their own 95% of the time. Since the males don't eat nearly as much as the females, because they're not producing eggs, they only spend a small percentage of their time eating. What is there left to do? Well, what are they normally doing all day long? Protecting the females
, or doing this or that for the females. So, without the females around, they've got nothing to do. Which means, basically, they're bored
. Now, when I was observing this, there were still two groups of the ducks, the older bunch and the younger bunch, and at the time I was observing this, there were two older drakes and three younger ones. So what did they spend most of their time doing? Messing with each other! Starting shit with each other! One or two of the younger guys would go over to where the older ones were hanging out and just start some unnecessary "fighting" – it was hardly fighting, not at all like what they would do when they were trying to impress the females. From what I could tell, more often than not, it was the younger drakes who'd go over and mess with the older guys, and occasionally it was the other way around.
Another major thing that the traditional models got very wrong is sex
.
One of the ways of giving evidence in favour of kin selection is by pointing to the fact that most species have ways of detecting relatedness, many of them involving chemical signatures detected in pheromones, but plenty involving other forms of recognition.
Just to give a few examples:
Some research being done with a particular species of monkeys included making recordings of several of the members of the group making different types of calls and voice gestures and whatnot. A speaker was hidden in a bush and the scientists played a recording of one of the very young children making an alarm call. The immediate reaction of all of the adults (minus one) was that they looked at the mother of the infant whose voice was on the recording, demonstrating just how well they can detect relatedness.
A type of scenario that has been observed in primates: one female (A) does something mean to another female (B), and then later in the day, the child of female B does something mean to the child of female A. This requires the children to understand the relatedness between the generations.
I think this next example probably applies to a few different rodent species. Male hamsters are migratory, meaning that, once they mate, they don't stick around. That also means that, usually, when a male comes upon a female with a litter of kids that aren't his own offspring, he will kill and eat them. Unless
they are the offspring of a close relative of his, such as a brother or a close cousin, the relatedness being detected via pheromones.
So one way of arguing for kin selection is to say that these mechanisms that allow individuals to detect close relatives exist for the purpose of
helping to pass on copies of one's own genes, since close relatives have some degree of genes in common. This helps pass on copies of one's own genes because when an individual detects that someone is a close relative, that detection causes the individual to cooperate
with that other instead of attacking him or her, such as in the case of the male hamster detecting that those kids were fathered by his brother (or other close relative). And this idea of cooperation between kin is also used to explain away any behavior that appears to be "altruistic".
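By the way, this cooperate-with-kin logic is usually formalized as Hamilton's rule – helping pays off, gene-wise, when r × B > C, where r is relatedness, B the benefit to the recipient, and C the cost to the helper. The rule isn't named above, so take this as my gloss, with illustrative numbers rather than field data:

```python
# Hamilton's rule, the standard formalization of the kin-selection
# logic described above. Numbers below are illustrative, not data.

def helping_pays(relatedness, benefit, cost):
    """Cooperate (e.g. spare someone's kids) when r * B exceeds C."""
    return relatedness * benefit > cost

# Coefficients of relatedness: full sibling 0.5, first cousin 0.125,
# unrelated individual 0.
print(helping_pays(0.5, 4, 1))     # sparing a brother's offspring: True
print(helping_pays(0.125, 4, 1))   # a cousin's, at the same cost: False
print(helping_pays(0.0, 4, 1))     # a stranger's: False
```

Notice how the rule reproduces the hamster story: the same act flips from "not worth it" to "worth it" purely as a function of relatedness, with no change in the behavior's cost or benefit.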
As a theory, is kin selection true or false?
What I want to say is that that's a nonsense question.
To start with, I'll relay some examples of ways in which we found out we were wrong about something, or of discovering / realizing something that calls into question conclusions of previous research and findings.
[Just a disclaimer: These are all examples I've gotten secondhand, though I don't see any good reason to question the reliability of the source; however, that certainly doesn't mean this source isn't mistaken about any of these. So, being that I'm not even close to being an expert in this branch of sciences, I can't guarantee that everything here is 100% correct and I can't give you any reason to think that it isn't. I'm just relaying information I got secondhand.]
It had previously been observed that the social rank of a chicken was inherited. I'm not sure if this just had to do with females, and that the social rank was inherited from the mother, or if this was true of both males and females, but it doesn't matter. So, this seemed like a pretty clear instance of the heritability of a social behavioral trait. However, what was later found – I don't know how much later, and I don't know when – is that what is actually being inherited has something to do with the melanism of the feathers, such that it causes the other chickens to peck at the individual a lot, which thereby reduces the individual to submission. So, it wasn't that some social behavioral trait was being inherited, but a visibly perceptible anatomical trait that significantly influenced how other members of the species treated that individual.
It had previously been thought that chicks inherited an instinctive behavior of pecking at grubs and such. However, someone eventually figured out that what was actually being inherited was the tendency for the chick to peck at its own toes, thus discovering quite by accident that there were yummy things it could find down there, too. Not sure how it was done, but I guess the scientists somehow covered the chicks' feet, and so they didn't display this supposedly inherited instinct to peck at bugs in the ground.
There was an experiment in which scientists stimulated a certain part of a rat's brain, and the rat immediately attacked, tore into, and killed a mouse (or maybe it was another rat?) that was nearby. The scientists of course would have done this more than just once, and it was the same each time, so they concluded that this part of the brain definitely had something to do with aggression and aggressive behavior. I think that part of the brain was in the hypothalamus, and thus lots of research and studies and experiments went into studying the hypothalamus under the assumption that it controlled aggression. Eventually someone discovered that it had nothing whatsoever to do with aggression; rather, it had to do with hunger. In the original rat experiment, they had misinterpreted the behavior as aggressive when it was actually predatory behavior driven by hunger. It would be akin to you, if that part of your brain were stimulated, running into the kitchen and literally ripping open a box of cereal or cookies or something because you were so overtaken by ravenous hunger.
There was a similar misinterpretation of rat behavior after stimulating a certain part of the brain, and I think the misinterpretation there was also that it was aggressive behavior. The behavior was that the rat immediately started shredding everything in its enclosure. As it turned out, that part of the brain actually had something to do with mothering behavior: the stimulation was prompting the rat to start building a nest. I believe they didn't figure this out until they stimulated the same part of the brain in a monkey, perhaps, and she did something I can't now remember that was very obviously a mothering behavior.
Somewhat relatedly: it was believed for a very long time that female rat mating behavior was entirely passive, that they basically did nothing, so that only the males played an active role when it came to mating. I believe the scientist who discovered that this is wrong is (was?) a woman. What was eventually found out is that the passivity of the females was due entirely to their being kept in small enclosures in labs: if they were put into much larger enclosures and thus given lots of space, or if they were observed in a natural, i.e. non-lab, habitat, they did indeed play an active role, running around and engaging in some kind of courtship ritual, or something like that. (Something that apparently required a lot of space… not sure about the details.)
So, these examples demonstrate how easy it is to misinterpret and misunderstand observed behavior and thus to draw incorrect and misguided conclusions. And this is exactly what ethology tries to prevent, ethology being a discipline that tries to study and understand other species "in their own language", so to speak. It was, for example, an ethologist who first recognized bee dancing for what it is, and then figured out the content of the information being communicated, how to translate it. Ethologists pushed the idea that you can't possibly understand another species unless you observe it in its natural habitat instead of in a laboratory. Now, to us that sounds obvious, but you have to realize that, for a long time, to the majority of scientists studying other species, that sounded ridiculous, unnecessary. Look at the history of zoos: look at how long it took people to figure out that keeping animals in entirely cement enclosures was a bad idea. For a long time people believed that the habitat made no difference to the animals, and certainly no difference to their behavior.
The significance of these examples is also to show how much what is observed is driven by what scientists already believe. So, here are some more examples.