Monday, April 15, 2019

Johnson, S. (n.d.). Cooked Chicken Breast [Cooked chicken breast on a plate with rosemary.]. Retrieved from https://www.flickr.com/photos/artbystevejohnson/
Metabolic disorders, obesity, and diabetes have increased substantially as eating habits in Western cultures have changed. As agricultural science has advanced, the health consequences associated with it have not been monitored as closely. For example, there has been little long-term research on the effects of genetically modified foods because they are a relatively recent addition to the food industry. Nutritional standards have shifted and, in turn, have contributed to mental health disorders and other health-related issues. Unhealthy diets affect more than weight and heart health. The article by Melo, Santos, and Ferreira suggests that neuroinflammation is one of the main features of brain disorders linked to an unhealthy diet. Throughout this post, I discuss a number of findings that support the link between diet and mental illness, while highlighting the specific causes as well.
Agricultural practices have shifted toward producing food quickly for mass consumption rather than prioritizing its quality. In other words, the mass production of food affects the quality of the nutrition available to us. This fast-paced mindset has made fast food restaurants popular and affordable while reducing the intake of foods that are actually good for us. In America there is a huge gap between naturally grown food and what ends up on our dinner plates. Take chicken, for example. Chickens are mass-inseminated on farms thousands of miles from our dinner tables and given hormones that make their bodies grow bigger and reproduce faster. Again, we do not know the long-term effects of this, yet these chickens ultimately end up on our plates. This kind of food also makes the consumer crave larger portions. Sugar consumption, carbohydrate intake, and the overuse of protein "is a key driver of the modern pandemic of obesity and metabolic conditions" (Melo et al., 2019). The article then addresses the concept of nutritional psychiatry and what it actually means. Nutritional psychiatry refers to the impact of specific nutrients in someone's diet on psychiatric conditions; in other words, it examines how what we eat relates to how we think and behave.
In exploring the causes of obesity, fatty acids have been shown to have a major impact because of their toxicity. Other dietary problems contribute to metabolic syndrome, type 2 diabetes, and brain dysfunction. The rest of the article discusses the role fatty acids play in mental health issues, specifically depression. While it is widely believed that a balanced, healthy meal makes you happier, a study conducted in 2014 showed that "healthy diets, including a high intake of fruit, vegetables, fish and whole grains, were inversely correlated with depression" (Melo et al., 2019). The authors also cite another study with similar results. Across the randomized controlled trials (RCTs) reviewed, the risk of mental and physical health problems was significantly lower among those following the Mediterranean diet.
Because of how our bodies work, isolating a single food type with no side effects is nearly impossible. However, multiple meta-analyses have identified a potential therapeutic target for these diseases: reducing levels of polyunsaturated fatty acids. The following section discusses the science behind saturated fatty acids, neuroinflammation, and possible links to mood disorders. Microglial cells "respond rapidly to pathological changes in the brain, altering their morphology and phagocytic behavior, and increasing cytotoxic responses by secreting NO, proteases and cytokines, such as TNF-α and IL-1β" (Melo et al., 2019). In short, these cells alter the brain in ways that contribute to mood disorders. Inflammation has also emerged as an important factor in mood disorders. Some of the studies were conducted on mice, and the research showed that "increased consumption of high fat diet is related to depressive-like behavior and emotional disorders in mice" (Melo et al., 2019). The next section of the article discusses polyunsaturated fatty acids, neuroinflammation, and their links to mood disorders. The nutritional transition discussed earlier has made high amounts of SFAs and PUFAs common through dairy products, vegetable oils, and red meats. Docosahexaenoic acid and arachidonic acid are both major components of brain cells, where they act as structural components.
A study by Kleinridders et al. showed that "reduced insulin signaling in the brain, as a result of insulin resistance, led to increased levels of monoamine oxidases and increased dopamine clearance. They further showed that this change in dopamine metabolism led to age-related anxiety and depressive-like behavior in mice, results consistent with the above mentioned increasingly important role of dopamine signaling in mood disorders" (Melo et al., 2019). In essence, these results led the authors to conclude that dopamine signaling can be altered, which is another factor contributing to mood disorders.
At the end of the article, the authors conclude that, given the frequent failure of antidepressant therapies, the idea of dietary interventions is not a bad one at all. To reach this goal, it is important to tailor patients' diets to help protect and strengthen their mental health, and the foods humans consume should be examined further as new results emerge.
By DeAundrae Ballard, University of Florida  
                                                
References

Melo, H. M., Santos, L. E., & Ferreira, S. T. (2019). Diet-derived fatty acids, brain inflammation, and mental health. Frontiers in Neuroscience. https://doi.org/10.3389/fnins.2019.00265




The Cost of Our Education is Not Matching Our Training







Have you ever wondered whether your doctor was competent enough to take care of you? Well, if the medical school curriculum were expanded, you might never have to worry again!

Hello all! My name is Britney Young and I am a first-year Biology major on the pre-med track here at the University of Florida. For a long time I have known in my heart that I wanted to be a doctor, so I have always looked up medical schools to see which ones I would like to attend. From all the research I did before coming to college, I realized how costly medical school actually is. This topic is important to me personally and to the many people around the world who want to go to medical school and have questions about its cost and the education they will be receiving. It matters because so much of their lives will be spent going through medical school.

It is clear that most people are aware that medical school is extremely expensive. Students have to take out loans in order to attend and follow their dreams. You would think that with the cost of medical school these students would be getting the best training and education there is; however, there is still much to be done to the curriculum. The only problem is that an increase in training would call for an increase in cost, and that is not ideal for most students. I recently read an article called "Weighing the cost of educational inflation in undergraduate medical education" (2016), which essentially discussed the cost of medical school and what would have to change to provide a better curriculum.

Alterations in the educational training students receive will likely come from those inside the medical schools themselves. As mentioned in the text, "These course chairs are typically passionate about their subspecialty content and ensuring that learners know the latest developments in this field, which creates pressure to add new teaching content and evaluation items each year" (p. 790). If a change is going to happen, it will need to come from the inside. The text reveals, "For example, for the 2015 version of the Medical Council of Canada Qualifying Examination (MCCQE) Part I the scoring scale was revised and a new pass score approved by the Central Examination Committee that represented an increase in pass score of 12.8 % (increase from 390 to 440 on the former scale)" (p. 790). Changes to the curriculum like these would help students be more successful in the field.

The increase in training could lead to an increase in the cost of attending medical school and getting your education. According to the text, "And, if we require additional resources to meet changing training requirements, will this increase the cost of training, add to student debt burden, and make medical school training too expensive for some students?" (p. 793). Students are already on the verge of heavy debt because of the loans they took out as undergraduates, and this would add even more stress. Based upon recent data from the Association of Faculties of Medicine of Canada (AFMC), the current average debt of graduating medical students in Canada is in excess of CDN$160,000 (p. 793). There needs to be a way to improve the education while avoiding an increase in cost. Such a change would help many students succeed and would also produce better healthcare workers.

Diastolic Heart Failure Vs. Ejection Fraction Failure and How to Correctly Diagnose

Figure 1. This image of a heart relates to the topic of the literature review, heart failure. In the image, one can see the area where blood enters (the blue) and where blood exits (the red). Moderator, N. (2006). PurposeGames [Online image]. Retrieved April 13, 2019, from https://www.purposegames.com/game/the-heart-quiz

The human heart is one of the most important and beautiful organs in the human body. While only the size of an individual's fully grown fist, the human heart is engineered to perfection with one task and one task only: to pump oxygen-rich blood to the rest of the body. This masterpiece, however, is not invincible. Depending on one's genetics, eating habits, or age, the heart can develop a defect. Heart disease is the leading cause of death in the United States (CDC data for the U.S.). The most common diagnosis is coronary heart disease, according to the NHS (National Health Service). However, there are many categories of heart disease and heart failure, such as congenital heart disease, arrhythmias, diastolic heart failure, and ejection fraction heart failure. For this review, we will focus on two specific illnesses: diastolic heart failure and ejection fraction heart failure.
The beating vessel in our chest is made up of four main chambers: the right atrium and right ventricle, followed by the left atrium and left ventricle. The right atrium receives deoxygenated blood and pumps it into the right ventricle. The right ventricle then pumps that blood through the pulmonary arteries to your lungs, where it picks up fresh oxygen. From there, the blood returns through the pulmonary veins into the left atrium, flows into the left ventricle, and is pumped out through your aorta, carrying oxygenated blood back to the rest of your body. For this review, however, we will focus on two main components of the heart: the right and left ventricles. When an individual faces diastolic heart failure, one or both ventricles become stiffer than they need to be, restricting the amount of deoxygenated and oxygenated blood that can be pumped to the lungs or body. As a result, an individual may experience symptoms that include shortness of breath, constant fatigue, and swelling of the ankles, legs, and feet. The ventricular heart muscle thickens because the heart is working two or three times as hard to move blood onward into the aorta.
In the event of ejection fraction failure, it is the left ventricle that dilates and struggles to contract forcefully enough to eject the oxygenated blood it receives. Ejection fraction failure is dangerous because the left ventricle is responsible for pumping oxygenated blood to the rest of the body; if it cannot do that, you are starving your muscles and other vital organs of oxygen-rich blood, causing them to weaken and eventually shut down.
In order to diagnose someone correctly with diastolic heart failure, one must follow three main criteria: "(1) show symptoms and signs consistent with heart failure (including dyspnea); (2) have a nondilated left ventricle with preserved ejection fraction ≥50%; and (3) display evidence of structural heart disease, such as evidence of diastolic dysfunction on echocardiography" (The American Journal of Medicine). In other words, the patient must show obvious signs of dyspnea, which means difficulty breathing or shortness of breath; must have a normal, nondilated left ventricle that still ejects at least half of the blood that fills it; and must show difficulty relaxing when the heart chambers fill with blood (diastolic dysfunction). These are three important, life-changing steps that, if followed correctly, can properly diagnose someone with diastolic heart failure. The state of the left ventricle and the percentage of blood it ejects are what determine whether an individual is facing ejection fraction heart failure or diastolic heart failure.
Now, when diagnosing a patient with ejection fraction heart failure, a physician must look for symptoms similar to those of diastolic heart failure: fatigue, shortness of breath, and so on. The main differences, however, are a dilated left ventricle and an ejection fraction lower than 40%, meaning less than 40% of the blood filling the ventricle is pumped out with each beat. A dilated left ventricle tells us the weakened muscle is not ejecting enough of the blood it receives; because of this, the chamber stretches and dilates (grows in size).
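To make the distinction concrete, here is a rough sketch of the decision in Python. It is only an illustration of the ejection-fraction and dilation cut-offs quoted above, with hypothetical inputs, and is in no way a clinical tool:

```python
def classify_heart_failure(ejection_fraction, left_ventricle_dilated, has_symptoms):
    """Illustrative sketch of the cut-offs described above -- not a diagnostic tool.

    ejection_fraction: percent of blood the left ventricle pumps out per beat.
    left_ventricle_dilated: whether imaging shows a dilated (enlarged) left ventricle.
    has_symptoms: dyspnea, fatigue, swelling, and similar signs of heart failure.
    """
    if not has_symptoms:
        return "no heart failure symptoms"
    if ejection_fraction >= 50 and not left_ventricle_dilated:
        return "consistent with diastolic heart failure (preserved ejection fraction)"
    if ejection_fraction < 40 and left_ventricle_dilated:
        return "consistent with ejection fraction heart failure (reduced ejection fraction)"
    return "borderline -- further evaluation needed"


print(classify_heart_failure(55, False, True))   # diastolic pattern
print(classify_heart_failure(35, True, True))    # reduced ejection fraction pattern
```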
Both diastolic and ejection fraction failure share similar symptoms, such as shortness of breath, chest pain, and constant fatigue, to name a few. With the two ailments so similar, it is easy to blur the line between them. So, if a doctor were to look at reports passively, a misdiagnosis could occur, resulting in unnecessary tests and materials, costing the hospital and the patient money and possibly the patient's life. However, for every problem there is a solution. If we followed and checked the criteria thoroughly, applying the three basic rules when diagnosing someone with heart failure, correct diagnosis rates would rise, limiting the number of misdiagnoses and, in turn, lowering the rate of heart-related deaths. Furthermore, hospitals would save time and resources while saving lives by applying the more effective treatment the first time, giving patients a speedier recovery.

By: Joshua Modeste, University of Florida


References

  
CDC, NCHS. (2015). Underlying cause of death 1999-2013 on CDC WONDER online database. Data are from the Multiple Cause of Death Files, 1999-2013, as compiled from data provided by the 57 vital statistics jurisdictions through the Vital Statistics Cooperative Program. Accessed February 3, 2015.


Argulian, E., & Messerli, F. H. (2014). Review: Misconceptions and Facts About ‘Diastolic’
 Heart Failure. The American Journal of Medicine, 127, 1144–1147. https://doi.org/10.1016/j.amjmed.2014.06.010

Sunday, April 14, 2019

Gerrad Hardy P vs NP


P vs NP

Figure 1: P ≠ NP. Reprinted from flickr.com, by J. Kaláb, 2010. Retrieved from https://www.flickr.com/photos/pitel/4900893832/in/photolist-8t5mZw-6hHhF4-czRVuY-ejA75v-dgzQkv/. Copyright 2010 by Jan Kaláb.
P vs NP is one of the seven millennium problems posed by the Clay Mathematics Institute in 2000. The article I am reviewing is the official statement of this problem, written by Stephen Cook, the first person to precisely define it. The article is broken into three parts: the first defines the problem very precisely using a combination of math and logic statements, the second focuses on the history and importance of the problem, and the third covers different conjectures (conclusions drawn from incomplete information) and failed attempts at a proof. In this post, I aim to help you understand the problem without diving into the math and logic used in the formal definition, and then to share with you the importance of this problem and why it was chosen as one of the seven millennium problems.
Before diving into the definition, a few things need to be explained so you can understand the problem and its importance: what exactly a millennium problem is, and time complexity, the idea the problem is centered on. As mentioned above, P vs NP is one of the seven millennium problems proposed by the Clay Mathematics Institute on May 24th, 2000. These seven problems were some of the most difficult and important problems at the turn of the century, and as such the Clay Mathematics Institute put out an offer of one million dollars to anyone who could solve one of them. As of today, 19 years later, only one of the seven has been solved (the Poincaré Conjecture). These problems are very difficult to even understand, as their definitions require a lot of specialized and advanced knowledge in their respective fields. P vs NP, however, is considered one of the easiest to understand because it relates easily to common problems and tasks.
While I could just use real-world examples to explain P vs NP, my explanation would be lacking without an understanding of time complexity, the central idea the problem is based on. Time complexity in computer science describes the amount of time it takes to run an algorithm. An algorithm is a process or set of rules to be followed in calculations or other problem-solving operations. Time complexity can be measured in multiple ways, but the measure computer scientists tend to care most about is the worst-case time complexity. So, if I ask a computer to do something like add a list of numbers together, what is the worst-case scenario? What is the longest it could take to solve that problem? The growth of a problem can be mapped by specific function types, like a polynomial or an exponential function, but what is important to understand is that time complexity describes how long a problem takes to solve as it gets bigger and more intensive.
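To make worst-case growth concrete, here is a small Python sketch of my own (not from Cook's article). It times a linear-time task, summing a list, against an exponential-time one, enumerating every subset of that list; even modest inputs show how differently the two scale.

```python
import time
from itertools import combinations


def sum_list(numbers):
    """Linear time: the work grows in direct proportion to the input size."""
    total = 0
    for n in numbers:
        total += n
    return total


def count_subsets(numbers):
    """Exponential time: a list of n items has 2**n subsets to walk through."""
    count = 0
    for size in range(len(numbers) + 1):
        for _ in combinations(numbers, size):
            count += 1
    return count


for n in (10, 20):
    data = list(range(n))

    start = time.perf_counter()
    sum_list(data)
    linear_seconds = time.perf_counter() - start

    start = time.perf_counter()
    count_subsets(data)
    exponential_seconds = time.perf_counter() - start

    print(f"n={n}: summing took {linear_seconds:.6f}s, "
          f"enumerating subsets took {exponential_seconds:.6f}s")
```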
Now that we have that out of the way, I can move on to the problem of P vs NP. Problems can be split into different levels of complexity. As you go up in levels, the problems take longer and get harder for computers to solve, until you reach problems that can never be solved no matter how much time or what computer is used (e.g., the halting problem). P problems are problems where a solution is easy to find and easy to check. NP problems are a level above P: a solution is hard to find, but if one is provided, you can quickly check whether it is correct. P problems stay relatively easy for a computer to solve even as the input values grow. NP problems, however, are not easy to solve in that sense; as the values used in the problem increase, things quickly get out of hand and require enormous amounts of time for the computer to solve. This goes back to how complexity can be mapped by a function, if you want to dive deeper into the differences, but for now I will leave it at that.
When I first had that explained to me, I still found it hard to follow. It wasn't until these ideas were explained with real-world examples that they really started to connect. Some examples of P problems are addition, multiplication, and solving a Rubik's cube. Let's focus on that last one, because everyone knows how difficult a Rubik's cube can be and it is easy to visualize. There is a set of rules you can follow that results in a completed Rubik's cube no matter the starting orientation. For a human, doing this on a 3x3x3 cube can take a while, but for a computer, following these steps is just a series of simple calculations. Now let's scale the Rubik's cube up to, say, 100x100x100. To a human that would seem impossible, but for our computer following a set of rules (the algorithm), it doesn't really matter what size the cube is. Modern computers can handle enormous numbers, and the work of running those computations just doesn't grow faster than the rate at which computer performance has been increasing.
NP problems, on the other hand, do get harder fast. As the inputs to an NP problem grow, so do the difficulty and the time computers need to solve it. Sticking with the well-known puzzle theme, Sudoku is an example of an NP problem. If you ask a computer to solve a classic 9x9 Sudoku grid, it can do it quickly enough that you may not think it was difficult, but what if we made it bigger? What if we asked it to complete a 100x100 Sudoku grid? As the grid grows, the difficulty of the problem gets out of hand very quickly, even for the strongest computers. This is because the rules the computer uses to solve Sudoku are not as efficient as the ones used on the Rubik's cube; the algorithm is more complex. If I gave the computer a proposed solution to the 100x100 grid, however, it would be able to tell quickly whether it was correct or not. That is the catch with NP: there just isn't a known quick, efficient means of solving these problems, even though checking an answer is easy.
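As a rough illustration of the "easy to check" half (a toy sketch of my own, not something from the article), here is a Python verifier for a completed Sudoku grid. Checking a proposed solution only means scanning each row, column, and box once, which stays fast even for large grids, while finding that solution in the first place is the part that blows up.

```python
def is_valid_solution(grid):
    """Check a completed n x n Sudoku grid (n a perfect square) in polynomial time.

    This is the cheap 'verify a certificate' side of an NP problem: every row,
    column, and box must contain each symbol 1..n exactly once.
    """
    n = len(grid)
    box = int(n ** 0.5)
    expected = set(range(1, n + 1))

    rows_ok = all(set(row) == expected for row in grid)
    cols_ok = all(set(grid[r][c] for r in range(n)) == expected for c in range(n))
    boxes_ok = all(
        set(grid[r][c]
            for r in range(br, br + box)
            for c in range(bc, bc + box)) == expected
        for br in range(0, n, box)
        for bc in range(0, n, box)
    )
    return rows_ok and cols_ok and boxes_ok


# A solved 4x4 Sudoku: the check runs quickly no matter how the grid was found.
solved = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]
print(is_valid_solution(solved))  # True
```

The check finishes almost instantly even for much larger grids, and that asymmetry between verifying and solving is exactly what the P vs NP question is about.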
The question asked by Stephen Cook in his P vs NP paper is: does P = NP or does P ≠ NP? In other words, are all NP problems just P problems for which we haven't yet figured out the best way to solve them, or are there some problems that will always be NP? Whether a problem is P or NP is determined by the function that maps its complexity, but in essence the question asks whether these really difficult problems might actually not be so difficult at all, we just haven't figured out how, or whether there truly is no easy way to solve them. Most computer scientists believe P ≠ NP, simply because of the consequences that P = NP would have.
Figure 2: P = NP. Reprinted from flickr.com, by L. Domnitser, 2009. Retrieved from https://www.flickr.com/photos/ldrhcp/3470903457/in/photolist-8t5mZw-6hHhF4-czRVuY-ejA75v-dgzQkv/. Copyright 2009 by Leonid Domnitser.
So, why does it matter whether a computer takes longer on some problems than on others? Why was this posed as one of the biggest problems of the century? Because some of the problems in NP would have world-changing effects if they were easier to solve. One example is protein folding, the process by which protein chains assume the three-dimensional shapes that allow them to do their jobs. If we could understand and replicate protein folding easily, we could use it to combat or cure cancer and many other diseases. Another example of an NP problem with world-changing implications is the public encryption keys used by banks, governments, and any other group that wants to keep information private. These encryption keys are based on NP problems, so if there were an easy way to solve NP problems, figuring out a key would no longer be difficult. Our whole online security system would become insecure.
If P = NP, the world could have miracle answers to a lot of pressing issues almost overnight. If P ≠ NP, that knowledge would spur computer scientists to overcome the complexity gap in other innovative ways, such as machines able to handle NP problems more efficiently. Either way, a proof in either direction would have a huge impact on computer science and would catapult its author into international fame in the academic world.

References:

Cook, S. (2000, May 24). The P versus NP problem [PDF]. Peterborough, NH: Clay Mathematics Institute.
Domnitser, L. (2009). Strange Graffiti at the Engineering Building [Online image]. Retrieved April 14, 2019, from https://www.flickr.com/photos/ldrhcp/3470903457/in/photolist-8t5mZw-6hHhF4-czRVuY-ejA75v-dgzQkv/
Kaláb, J. (2010). P != NP [Online image]. Retrieved April 14, 2019, from https://www.flickr.com/photos/pitel/4900893832/in/photolist-8t5mZw-6hHhF4-czRVuY-ejA75v-dgzQkv/

Nico Chaparro - The Future of the Future


With technology getting better every day and electronics getting more powerful in smaller sizes, we have to hit a point where we cannot possibly improve any further, right? The latest MacBook Air is already thin and light, yet it is far more powerful than the room-sized machines that were the first computers ever invented, and it obviously requires far less power to run. How are electronics improving so much, and when will we hit a brick wall? Superconductors are a significant part of that explanation.
Moore's Law states, in simple terms, that every two years electronics get twice as powerful. This happens because manufacturers manage to fit more transistors (tiny electronic switches measured on the nanometer scale) into the same small space, but soon enough we will reach a point where we simply cannot shrink them any further as we approach the atomic scale. So it seems that once we can no longer keep improving electronics this way, as we have for so many years, we will have hit our limit. However, superconductors can help us make electronics more powerful in a slightly different fashion.
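As a back-of-the-envelope illustration (my own numbers, not the article's), here is what doubling every two years looks like in Python, starting from a count roughly in the range of the earliest microprocessors:

```python
def transistors_after(years, starting_count=2300, doubling_period_years=2):
    """Project a transistor count under Moore's Law-style doubling.

    A starting_count of 2,300 is roughly the first Intel microprocessor (1971);
    the exact figures here are illustrative only.
    """
    return starting_count * 2 ** (years / doubling_period_years)


for years in (10, 20, 40):
    print(f"After {years} years: about {transistors_after(years):,.0f} transistors")
```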
A superconductor is a material (metal or otherwise) that has practically zero electrical losses when carrying a current. This, of course, is very desirable, because lower power losses lead to more efficient electronics that require less power. If electronics were made with superconducting wires, they would be extremely efficient compared to the electronics of today. Who doesn't want a smartphone with double the battery life of their current phone in the same form factor? All of our current electronics and their wires lose power constantly, but a superconductor can carry a current practically forever, losing only a tiny percentage of its power over the span of multiple years. This is a drastic improvement over modern wires.
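To see why zero resistance matters, here is a quick sketch using the standard Joule-heating formula P = I²R, with illustrative numbers of my own rather than figures from the NIST review:

```python
def power_loss_watts(current_amps, resistance_ohms):
    """Power dissipated as heat in a conductor: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms


current = 10  # amps, illustrative only

# An ordinary wire with some resistance versus an ideal superconducting wire.
print(power_loss_watts(current, 0.5))  # 50.0 W wasted as heat in the ordinary wire
print(power_loss_watts(current, 0.0))  # 0.0 W wasted in the superconductor
```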


 
Figure 1: An Older Computer by IBM
Jenkins, A. (Photographer). (2010, August 8). IBM JX. Retrieved from https://www.flickr.com/search/?text=first%20computer&license=2%2C3%2C4%2C5%2C6%2C9


Superconductors can be useful in much more than just wires, though; they can also be used as magnets. A very interesting trait of superconducting magnets is that they can levitate below a magnet through attraction to it, rather than simply levitating above a magnet through repulsion (as ordinary magnets do). MAGLEV (magnetic levitation) trains are very fast and highly efficient, and they work through magnets levitating by repulsion. Imagine what inventions we might get when engineers find uses for superconducting levitation below a magnet. Some possible applications of superconductors include more efficient and powerful computers, better MAGLEV trains, and better boat engines (Lundy, Swartzendruber, & Bennett, 1989). Naturally, you have to wonder why we are not using superconductors everywhere right now. That is where it gets a bit tricky.
Some metals can become superconductors at certain temperatures because of some complex chemistry and physics. However, those temperatures are extremely cold, far below room temperature. For example, mercury wire is a superconductor, but only at -452F. This critical temperature changes depending on the material. A rare-earth material called YBCO was discovered in 1987 to be a superconductor at -292F, a much better temperature than mercury's but still a very cold one. One of the least cold critical temperatures, -231F, was found in a non-rare-earth compound in 1988, and that compound was more stable as well. These extremely low temperatures are achieved using liquid nitrogen, which is cold enough for these tests to be done. As a result, more research is needed to find materials that are superconductors at temperatures reasonable enough for our day-to-day electronics. Temperature, though, is not the only obstacle to bringing superconductors into modern devices (Lundy, Swartzendruber, & Bennett, 1989).
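Since the review quotes critical temperatures in Fahrenheit, a quick conversion with the standard formula puts those same numbers on the Kelvin scale physicists usually use:

```python
def fahrenheit_to_kelvin(temp_f):
    """Convert Fahrenheit to Kelvin: K = (F - 32) * 5/9 + 273.15."""
    return (temp_f - 32) * 5 / 9 + 273.15


# Critical temperatures as quoted above, in degrees Fahrenheit.
critical_temps_f = {
    "mercury": -452,
    "YBCO": -292,
    "1988 non-rare-earth compound": -231,
}
for material, temp_f in critical_temps_f.items():
    print(f"{material}: {temp_f}F is about {fahrenheit_to_kelvin(temp_f):.0f} K")
```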
Ceramics are the superconductors with the warmest known critical temperatures (around -200F), but they are brittle. Even if they had critical temperatures around 80F, they would still need to be strong enough to use in our devices. One of the factors in the strength of a ceramic superconductor is its shape, and the problem lies in finding shapes that make the ceramics both useful in devices and strong. That is not easy when circuits are generally small and the ceramics have to be small enough to fit in them. Temperature and material strength are two of the most significant obstacles to creating usable superconductors (Lundy, Swartzendruber, & Bennett, 1989).
So, if and when we do manage to make room-temperature superconductors a reality, what can we expect? Possibilities include up to 40% lifetime savings in huge generators, which is a substantial economic benefit, and, even better, more powerful MacBooks and iPhones with much longer battery lives. Other applications are smaller and quieter boat motors, even more efficient MAGLEV trains, superconducting magnets, and a large number of medical applications (Lundy, Swartzendruber, & Bennett, 1989). These are the kinds of advances that make the future a reality, and they should come around sooner or later.

By: Nicolas Chaparro, University of Florida

References:
Lundy, D. R., Swartzendruber, L. J., & Bennett, L. H. (1989). A Brief Review of Recent Superconductivity Research at NIST. Journal of research of the National Institute of Standards and Technology, 94(3), 147–178. doi:10.6028/jres.094.018
