Category Archives: Method of the Month

Method of the Month: Osmotic Pumps

This post is the sixth in a series that aims to educate readers about the tools that are used in neuroscience research. Previously we discussed Radioactive Binding Assays, Novel Object Recognition, Calcium Imaging and EEG.

Currently I am running a study that examines the effects of chronic nicotine on sensory processing in mice. While I don’t mind coming into the lab on weekends, the prospect of visiting my animals 24/7 to inject nicotine wasn’t exactly practical. I could give daily or twice daily injections, but even that doesn’t really come close to approximating the behavior of human smokers.

That’s why we turned to a company called Alzet that manufactures miniature pumps for drug delivery in laboratory animals. These pumps can deliver small volumes of drug solution at a controlled rate over a period of up to six weeks. We opted for the 2002 model, with a reservoir volume of 0.2 mL and a flow rate of 0.5 µL/hr.
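The dosing arithmetic behind those specs is worth sanity-checking. Here’s a quick back-of-the-envelope sketch in Python; the reservoir volume and flow rate are the 2002 model’s specs quoted above, while the target dose and animal weight are hypothetical placeholders:

```python
# Sanity-checking osmotic pump numbers. The reservoir and flow rate are
# the Alzet 2002 specs mentioned above; the dose and mouse weight are
# hypothetical placeholders for illustration.
RESERVOIR_UL = 200.0     # 0.2 mL reservoir
FLOW_UL_PER_HR = 0.5     # nominal flow rate

# How long until the reservoir runs dry?
duration_days = RESERVOIR_UL / FLOW_UL_PER_HR / 24
print(f"Delivery duration: {duration_days:.1f} days")   # 16.7 days

# Because the flow rate is fixed, the dose is set entirely by the
# concentration loaded into the reservoir:
dose_mg_per_kg_per_day = 24.0    # hypothetical target dose
mouse_kg = 0.025                 # hypothetical 25 g mouse
mg_per_day = dose_mg_per_kg_per_day * mouse_kg
ml_per_day = FLOW_UL_PER_HR * 24 / 1000
print(f"Load the reservoir at {mg_per_day / ml_per_day:.0f} mg/mL")
```

Since the pump fixes the flow rate, the only lever for adjusting dose is the concentration of the loaded solution.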

They are pretty easy to use: just fill the reservoir with concentrated drug solution and pop on the cap. Then, anesthetize the rodent and make a small incision in its back using standard aseptic technique. After opening a small pocket for the pump with a hemostat, insert the device and close the incision with surgical staples.

After implanting these pumps subcutaneously, I couldn’t help but wonder if these things were actually going to work. How could a little piece of plastic control the flow of drug solution so precisely? If they worked because of osmotic pressure, then shouldn’t the flow rate depend on the concentration of the dissolved drug? I knew that osmotic pumps were popular, but I couldn’t shake the irrational fear that these $20 devices were just one big scam.

The only way I could settle my nerves was by figuring out how the devices worked. Luckily Alzet’s website is relatively transparent about the mechanism. Now I know that the osmotic pressure difference is actually between the animal’s body and the “salt sleeve” that surrounds the drug reservoir. From their website:

ALZET pumps operate because of an osmotic pressure difference between a compartment within the pump, called the salt sleeve, and the tissue environment in which the pump is implanted. The high osmolality of the salt sleeve causes water to flux into the pump through a semipermeable membrane which forms the outer surface of the pump. As the water enters the salt sleeve, it compresses the flexible reservoir, displacing the test solution from the pump at a controlled, predetermined rate. Because the compressed reservoir cannot be refilled, the pumps are designed for single-use only.

The rate of delivery by an ALZET pump is controlled by the water permeability of the pump’s outer membrane. Thus, the delivery profile of the pump is independent of the drug formulation dispensed.

Pretty nifty, huh? Even niftier is the accompanying animation:


Now I am much more confident that my mice are indeed receiving the expected dose of nicotine. In retrospect my fears were misplaced, but in science a healthy dose of skepticism never hurts.

Method of the Month: Radioactive Binding Assay

This post is the fourth in a series that aims to educate readers about the tools that are used in neuroscience research. Previously we discussed Novel Object Recognition, Calcium Imaging and EEG.

Neurons communicate with one another by releasing small molecules called neurotransmitters that bind to specific receptors on adjacent neurons. The interaction between the receptor and the neurotransmitter (or ligand) is very strong and the two are said to fit together like a lock and key. An interesting feature of the brain is its ability to regulate the number of receptors expressed at the surface of the cell. For instance, exposure to nicotine results in robust increases in the number of receptors at the cell surface. So while initial exposure to nicotine may stimulate the majority of receptors, a subsequent dose of equal size will only stimulate a fraction of the upregulated receptors. A mechanism like this may underlie some features of addiction such as tolerance and withdrawal.

Because neurotransmitter binding to receptors plays such a crucial role in brain function, scientists need tools to quantify it. A classic way of doing this is by exposing ground-up brain tissue to radiolabeled molecules (or radioligands) that bind to complementary receptors found in the sample. Because the chemical difference between a radioligand and a normal ligand is tiny (just a couple of extra neutrons) and the radioactive signal can be detected very easily with a scintillation counter, radioligands are ideal probes for quantifying receptor binding. By incubating the brain tissue with the radioligand, washing the reaction mixture to remove any excess and measuring the remaining radioactivity, you can quantify binding. Repeating this assay at multiple concentrations yields a binding curve. Replicating each assay condition 2-3 times and averaging also improves the quality of the data dramatically.

Here is a made-up example of what binding assay data looks like. Our radioligand will be 3H-spiperone, which binds strongly to dopamine D2 receptors. On the x-axis we have the various concentrations of radioligand tested, ranging from 0.05 nanomolar to 1 nanomolar. On the y-axis we have the number of binding sites labeled at each concentration, expressed in femtomoles/microgram of protein sample:


But there’s a problem. While the majority of the radioligand will be bound to your receptor of interest, some will bind non-specifically to other proteins in the ground-up brain sample. In most cases, even multiple washes cannot remove this non-specific binding. Furthermore, non-specific binding will prevent us from reaching a binding plateau because, unlike specific binding, it does not saturate at higher concentrations of radioligand. Looking at our fictional data, we can see that the y-values do not reach a maximum but rather continue to rise. How do we calculate specific binding from this data? By measuring the non-specific binding and subtracting it from total binding. In other words, Total Binding – Nonspecific Binding = Specific Binding.

Non-specific binding can be measured by incubating the tissue with the radioligand plus a very high concentration of a second molecule that binds to all of your receptors of interest. Unlike the radioligand, this competitor is not radiolabeled. For instance, we might use sulpiride, another selective ligand for D2 receptors. Because the unlabeled competitor is present in great excess, it out-competes the radioligand for the limited number of D2 binding sites. When the brain tissue is then washed, the radioactivity that remains corresponds to non-specific binding, since any radioligand occupying D2 receptors has been displaced.

After replicating each assay condition with and without an unlabeled competitor, we can subtract the two curves to find specific binding. Here is what this data might look like:


Binding increases rapidly at the lower concentrations while reaching a plateau at the higher concentrations. This maximum number of binding sites is usually referred to as the Bmax value and is expressed as moles per microgram of protein sample. Here we can see that binding levels off somewhere between 400 and 500 fmol/ug. The concentration of radioligand at which binding reaches half the Bmax is known as the Kd. The lower the Kd for a given ligand, the higher its affinity for the receptor in question; the higher the Kd, the lower the affinity.

Once you have a specific binding curve, all that is left to do is fit the data using non-linear regression. If all goes well, the data should fit the following equation:

Y = (Bmax)*(X)/(Kd + X) where X = radioligand concentration

When X is equal to the Kd, the term (X)/(Kd + X) will equal 1/2 and therefore the Y value will be half the Bmax, as expected. When X becomes very large, the same term will approach 1 and the Y value will approach the Bmax value. For our fictional data, a non-linear regression reveals that the Bmax is 500 fmol/ug and the Kd is approximately 0.1 nanomolar. This makes sense because at 0.1 nM 3H-spiperone, binding is at 250 fmol/ug.
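If you’d rather not do the regression in a spreadsheet, the same fit takes a few lines of Python with SciPy. This is just a sketch on made-up data points generated from the fictional curve above (true Bmax = 500 fmol/ug, true Kd = 0.1 nM), so the fit should recover those values:

```python
# Non-linear regression of the saturation binding equation
# Y = Bmax * X / (Kd + X) on synthetic data mirroring the fictional
# example above (true Bmax = 500 fmol/ug, true Kd = 0.1 nM).
import numpy as np
from scipy.optimize import curve_fit

def binding(x, bmax, kd):
    return bmax * x / (kd + x)

conc_nM = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
total = binding(conc_nM, 500.0, 0.1) + 30.0 * conc_nM  # specific + linear nonspecific
nonspecific = 30.0 * conc_nM        # what the excess-sulpiride condition measures
specific = total - nonspecific      # Total - Nonspecific = Specific

(bmax, kd), _ = curve_fit(binding, conc_nM, specific, p0=(400.0, 0.2))
print(f"Bmax = {bmax:.0f} fmol/ug, Kd = {kd:.2f} nM")
```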

Once you have the method down (easier said than done), it’s possible to investigate how the receptor’s properties change with different experimental treatments. For instance, the literature reports that chronic treatment with the antipsychotic haloperidol upregulates D2 receptors. This kind of change in the quantity of binding sites could be detected as a change in the Bmax value, as measured with the binding assay discussed above. In contrast, an experimental treatment that altered the affinity of the binding sites for 3H-spiperone would be detected as a change in the Kd value. A shift toward the left would correspond to increased affinity (a lower concentration suffices to occupy half the binding sites) while a shift toward the right would correspond to decreased affinity (a higher concentration is necessary to occupy half the binding sites).

Interested in learning more about binding assays? Here are some links that might be of interest:

  • Our fictional experiment has been done by real scientists. Find out how their results compare to ours.
  • Here is a paper about upregulation of nicotine receptors that makes use of saturation binding assays.
  • And here’s one about upregulation of D2 receptors.
  • Here is a helpful website for learning about the analysis of radioligand binding data.
  • And here you can find out how to use Microsoft Excel to perform non-linear regressions. I also have a pre-made spreadsheet for this purpose which I can share upon request. (In case you haven’t figured it out yet, most of these Method of the Month features are about techniques that I have or currently do make use of in my labwork.)
  • Millipore is the manufacturer of choice for binding assay equipment. A little on the pricey side, but my guess is that obtaining radioactive compounds is even more prohibitive for the home tinkerer.

Method of the Month: Novel Object Recognition

This post is the third in a series that aims to educate readers about the tools that are used in neuroscience research. Previously we discussed Calcium Imaging and EEG.

This month we will be looking at a behavioral measure of rodent memory that is useful for evaluating the role of experimental manipulations on cognition. Novel Object Recognition (NOR) is based on the premise that rodents will explore a novel object more than a familiar one, but only if they remember the familiar one. This tendency is actually shared by humans, as looking time is often used to make inferences about an infant’s memory in the absence of explicit, verbal recognition.

Before training the animals with objects, they are first allowed to acclimate to the testing environment, which is nothing more than a large bin equipped with an overhead camera. After a few acclimation sessions, the animals are ready for the training stage, which involves introducing two identical objects to the environment and allowing the rodent to explore:

training session

Following the training period, the rodent is removed from the environment for a delay period, which can range from 5 minutes to 24 hours depending on the type of memory being tested. After the delay, the rodent is returned to the bin, where one of the original objects has been replaced by a new one:

testing session

The amount of time that the rodent spends exploring each object can be calculated by hand or by using a computer program receiving input from the overhead camera. One company that manufactures such software is Clever Systems, Inc. The literature describes a variety of methods for analyzing results. One technique divides the time spent exploring the novel object by the total time spent exploring either object, yielding % novel exploration. An alternative is the discrimination ratio, defined as the difference in exploration time between the two objects divided by total exploration time. The method of analysis should be suited to the specific experimental setup.
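Both metrics are one-liners. A minimal sketch (the exploration times are hypothetical):

```python
# Two common NOR analysis metrics, computed from exploration times in
# seconds. The example times are hypothetical.
def percent_novel(novel_s, familiar_s):
    """Time on the novel object as a percentage of total exploration."""
    return 100.0 * novel_s / (novel_s + familiar_s)

def discrimination_ratio(novel_s, familiar_s):
    """(novel - familiar) / total: 0 means no preference, +1 novel-only."""
    return (novel_s - familiar_s) / (novel_s + familiar_s)

print(percent_novel(30.0, 10.0))         # 75.0 -> clear novelty preference
print(discrimination_ratio(30.0, 10.0))  # 0.5
```

A discrimination ratio near 0 after a long delay but well above 0 after a short one is the classic signature of intact short-term but impaired long-term memory.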

Though simple by design, NOR is actually quite flexible. For instance, changing the duration of the delay period allows one to selectively test short-term or long-term memory. Alternatively, the NOR protocol can be used to selectively test the effects of an acute drug treatment on a specific stage of memory formation. The experimenter can manipulate memory encoding, consolidation or retrieval by injecting the drug prior to the training, delay or testing period, respectively.

Interested in learning more about NOR?

  • I already linked to this paper about alcohol-mediated memory enhancement, but the methods section is worth a second look.
  • NOR is also being used to measure cognitive deficits associated with animal models of Alzheimer’s disease.
  • Here is an example of a looking time study in human infants, though it is primarily concerned with discrimination between possible and impossible objects rather than memory per se.

Method of the Month: EEG

This post is the second in a series that aims to educate readers about the tools that are used in neuroscience research. Last month we discussed Calcium Imaging.

Electroencephalography (EEG) is a powerful technique for assessing brain activity. It has been around much longer than newfangled methods like Positron Emission Tomography (PET) and functional Magnetic Resonance Imaging (fMRI), but remains relevant today for purposes as diverse as epilepsy treatment, prosthetics, and biofeedback.

Many people have heard of EEG in the context of “brain waves” and the different frequencies of brain activity associated with sleep and wakefulness. These different frequencies can be seen at right and the traces offer a nice glimpse of what raw EEG data looks like. Frequency analyses remain an important part of EEG research, but the technique is also capable of answering more complex questions about the brain’s thoughts and perceptions.

In 1924, the German doctor Hans Berger coined the term electroencephalography after making the first voltage recordings across the human scalp. Berger discovered an 8-12 Hz rhythm, the alpha wave, that was present when subjects relaxed but disappeared when they opened their eyes. As years passed and research progressed, the electrophysiological correlates of other cognitive states were also elucidated. Before discussing these advancements, let us examine how the electrical signals are generated in the first place.

When neurotransmitters are released at a synapse, they cause ion channels to open on the postsynaptic terminal of the next neuron. This results in an influx of positive ions that depolarizes the neuron, also known as an excitatory post-synaptic potential (EPSP). The local extracellular environment, depleted of positive ions, takes on a negative charge. As this current propagates down the conductive dendrite of the neuron, the size of the EPSP decreases. This means that the net depletion of positive ions is greatest outside the synapse and smallest outside the cell body, setting up an extracellular voltage difference along the axis of the neuron. This extracellular voltage difference represents the sum of the neuron’s inputs rather than its output (i.e. action potentials). Every neuron receiving synaptic inputs can therefore be thought of as a dipole with a specific orientation and polarity. A dipole corresponding to a single neuron is not detectable with EEG, but when thousands of neurons with similar orientation receive similar synaptic inputs, the dipoles sum together to yield strong voltage signals at the scalp.

Scalp voltages can be measured using a cap studded with electrodes. Sometimes a conductive gel is applied between the scalp and the electrode to improve the signal. The voltage at each electrode is measured relative to a reference electrode and amplified to increase the signal-to-noise ratio. The signal is then sent to a computer, where it can be filtered and analyzed.

Unfortunately, a given distribution of scalp voltages can be produced by an infinite number of unique dipole arrangements. For instance, two nearby dipoles of opposite orientation might cancel out (this phenomenon is actually quite common because the cortex contains many folds). This makes it very difficult to draw conclusions about the neuroanatomical regions that are involved in a given EEG signal.

Although EEG has much poorer spatial resolution than newer techniques like fMRI, its real-time access to electrical activity provides vastly superior temporal resolution. This property makes it ideal for investigating stereotyped neural responses to stimuli. The corresponding EEG recording is called an Event-Related Potential (ERP). Recordings from a single stimulus presentation will often bear little resemblance to the expected waveform, because the stimulus-locked response is buried in ongoing background activity; the responses to many stimulus presentations must be averaged together to yield clean data.
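The logic of trial averaging is easy to demonstrate with synthetic data. In this sketch (all numbers invented), a small stimulus-locked bump is buried in noise five times its size, yet it survives averaging across 200 trials because the noise cancels while the response does not:

```python
# Demonstrating ERP trial averaging on synthetic data. The "response"
# is a small Gaussian bump (a stand-in for a stimulus-locked potential);
# the noise is much larger but uncorrelated across trials.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 300                 # e.g. 300 ms at 1 kHz
t = np.arange(n_samples)

true_erp = 2.0 * np.exp(-((t - 150) ** 2) / (2 * 20.0 ** 2))  # ~2 uV bump
trials = true_erp + rng.normal(0.0, 10.0, size=(n_trials, n_samples))

single_trial_err = np.abs(trials[0] - true_erp).max()
average_err = np.abs(trials.mean(axis=0) - true_erp).max()
print(average_err < single_trial_err)   # True: averaging reveals the ERP
```

Averaging N trials shrinks the noise by roughly a factor of sqrt(N), which is why clean ERPs typically require dozens to hundreds of stimulus presentations.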

human erp 2

Here is the average ERP waveform in response to a sequence of auditory stimuli. The majority of stimuli are identical, while occasionally the subject is presented with a deviant stimulus of a different tone. The gray trace corresponds to the average response to standard stimuli whereas the black trace corresponds to the average response to the deviant stimuli. The black trace is marked by an increased positive potential at around 300 milliseconds, termed the P300. Because EEG signals are sensitive to factors like uncertainty and probability, they are thought to contain information about higher-order cognition.

ERPs are also useful in a clinical setting. Visual evoked potentials can be collected in response to a flashing checkerboard to examine whether a patient’s visual cortex is functioning properly. EEG recordings are also useful for localizing the origin of epileptic seizures in the brain. Sometimes surgeons will even implant electrodes beneath the skull or within the brain itself to get a more accurate picture of local activity. Incidentally, this was how Halle Berry neurons were first discovered.

Interested in learning more about EEG?

  • Open EEG is an online community for electronic hobbyists interested in building a homemade setup. They provide the plans and advertise that a complete system can be built for around $250.
  • Because EEG has high temporal resolution and fMRI has high spatial resolution, scientists are increasingly interested in combining the techniques. You can find a summary of this approach here.
  • Recently the theory of mirror neuron dysfunction in autism has received a lot of hype. EEG recordings provided the first evidence for this theory.
  • Researchers are also hoping to control prosthetic limbs with EEG.
  • Lastly, EEG is increasing in popularity as a tool for biofeedback. Here is a news clip about the fad.

Method of the Month: Calcium Imaging

This post is the first in a series that aims to educate readers about the tools that are used in neuroscience research.

Decades of neurobiology research have shown that calcium ions (Ca2+) are crucial second messengers in neurons. For instance, they regulate gene expression, bring about neurotransmitter release and facilitate synaptic plasticity. Ca2+ ions can enter the cytoplasm of a neuron from two main sources: the extracellular environment and intracellular stores.

The first route is mediated primarily by voltage-gated calcium channels (“C” in diagram). When a neuron becomes active, its membrane is depolarized and this allows Ca2+ ions to enter the cytoplasm.

The second route is mediated by Ca2+ channels on the endoplasmic reticulum. When Ca2+ ions enter the cytoplasm via voltage-gated calcium channels, SERCA proteins pump them into the endoplasmic reticulum at high concentrations. This intracellular store of Ca2+ can be released when G-protein coupled receptors on the cell surface produce inositol trisphosphate (IP3) via the action of Phospholipase C (PLC). IP3 stimulates calcium channels called IP3 receptors (“IP3R” in diagram) on the endoplasmic reticulum, which raise the concentration of Ca2+ ions in the cytoplasm dramatically.

Altered intracellular Ca2+ signaling has been implicated in a variety of disease states such as schizophrenia, Alzheimer’s and Huntington’s. In order to understand these diseases, it is important to be able to visualize the flux of Ca2+ ions within a neuron. Calcium imaging makes this goal a feasible one by allowing Ca2+ concentration to be detected as changes in fluorescence.

A molecule is called fluorescent if it emits light at a different wavelength than it absorbs: shine one color of light at a fluorophore and it will emit a different color, usually of a longer wavelength. The specific wavelengths at which a fluorophore absorbs and emits light are highly sensitive to the molecule’s structure. Because molecules often undergo conformational changes when they bind another chemical, a binding event also has the potential to change the properties of the fluorophore.

In 1985, Roger Tsien’s group at Berkeley was trying to chemically link a molecule that could bind Ca2+ ions to a molecule with fluorescent properties. They hoped that the resulting molecule would have different fluorescent properties depending on whether Ca2+ was bound or not. One of the compounds that they synthesized, called fura-2, is still very popular today.


The figure to the right is copied from Roger Tsien’s 1985 paper and depicts how fura-2’s excitation spectrum changes in the presence or absence of Ca2+ ions. Regardless of whether fura-2 binds Ca2+, it emits light at ~510 nm. However, the wavelength at which it absorbs light depends on whether Ca2+ is bound. In the absence of Ca2+, fura-2 is excited by 360 nm light; when saturated with Ca2+ ions, it is excited by 330 nm light. Therefore, by comparing the intensity of 510 nm light emitted when you shine 360 nm light on your biological sample with the intensity emitted when you shine 330 nm light on it, you can calculate the concentration of Ca2+ ions. Using high-resolution microscopes, it is even possible to localize the changes in fluorescence within a single neuron.
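In practice, the excitation ratio is converted to a concentration with the calibration equation from that same 1985 paper (Grynkiewicz, Poenie and Tsien). Here is a minimal sketch; the calibration constants below are hypothetical placeholders, since real values come from imaging calibration solutions of known [Ca2+]:

```python
# Grynkiewicz-style conversion of a fura-2 excitation ratio R to [Ca2+]:
#   [Ca2+] = Kd * beta * (R - Rmin) / (Rmax - R)
# where Rmin/Rmax are the ratios at zero/saturating Ca2+ and beta scales
# for the dye's brightness change. All constants here are hypothetical.
def ca_from_ratio(R, Kd=0.14, Rmin=0.3, Rmax=8.0, beta=5.0):
    """Return [Ca2+] in micromolar from the background-corrected ratio R."""
    return Kd * beta * (R - Rmin) / (Rmax - R)

# A rising ratio maps to rising calcium:
for R in (0.5, 2.0, 6.0):
    print(f"R = {R}: [Ca2+] ~ {ca_from_ratio(R):.2f} uM")
```

The appeal of a ratiometric dye is visible in the equation: dividing one intensity by the other cancels out dye concentration and illumination differences, leaving a quantity that depends only on [Ca2+].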

Here is a beautiful video that was made using Ca2+-sensitive dyes. It shows how Ca2+ release from the endoplasmic reticulum propagates down the dendrites of a neuron toward the cell body in a wave-like fashion. It also shows how these waves interact at a network level:

Interested in learning more about Ca2+ imaging? Here are some recent papers that use the technique:

  • Hagenston et al. examine how Ca2+ waves alter the membrane excitability of cortical neurons. Full disclosure: I used to work in this lab.
  • Tang et al. show how calcium signaling is involved in Huntington’s disease.
  • Jin et al. demonstrate a novel form of long-term depression (LTD) that involves calcium signaling.