Monday, December 30, 2013
Diamond Rain
Diamonds big enough to be worn by Hollywood film stars could be raining down on Saturn and Jupiter, US scientists have calculated.
New atmospheric data for the gas giants indicates that carbon is abundant in its dazzling crystal form, they say.
Lightning storms turn methane into soot (carbon), which as it falls hardens into chunks of graphite and then diamond.
These diamond "hail stones" eventually melt into a liquid sea in the planets' hot cores, they told a conference.
The biggest diamonds would likely be about a centimetre in diameter - "big enough to put on a ring, although of course they would be uncut," says Dr Kevin Baines, of the University of Wisconsin-Madison and Nasa's Jet Propulsion Laboratory.
He added they would be of a size that the late film actress Elizabeth Taylor would have been "proud to wear".
"The bottom line is that 1,000 tonnes of diamonds a year are being created on Saturn.
"People ask me - how can you really tell? Because there's no way you can go and observe it.
"It all boils down to the chemistry. And we think we're pretty certain."
Thunderstorm alleys
Baines presented his unpublished findings at the annual meeting of the Division for Planetary Sciences of the American Astronomical Society in Denver, Colorado, alongside his co-author Mona Delitsky, from California Speciality Engineering.
[Image: Gigantic storms on Saturn create black clouds of soot, which hardens into diamonds as it falls.]
Uranus and Neptune have long been thought to harbour gemstones. But Saturn and Jupiter were not thought to have suitable atmospheres.
Baines and Delitsky analysed the latest temperature and pressure predictions for the planets' interiors, as well as new data on how carbon behaves in different conditions.
They concluded that stable crystals of diamond will "hail down over a huge region" of Saturn in particular.
"It all begins in the upper atmosphere, in the thunderstorm alleys, where lightning turns methane into soot," said Baines.
"As the soot falls, the pressure on it increases. And after about 1,000 miles it turns to graphite - the sheet-like form of carbon you find in pencils."
By a depth of 6,000km, these chunks of falling graphite toughen into diamonds - strong and unreactive.
These continue to fall for another 30,000km - "about two-and-a-half Earth-spans" says Baines.
"Once you get down to those extreme depths, the pressure and temperature is so hellish, there's no way the diamonds could remain solid.
"It's very uncertain what happens to carbon down there."
One possibility is that a "sea" of liquid carbon could form.
"Diamonds aren't forever on Saturn and Jupiter. But they are on Uranus and Neptune, which are colder at their cores," says Baines.
'Rough diamond'
The findings are yet to be peer reviewed, but other planetary experts contacted by BBC News said the possibility of diamond rain "cannot be dismissed".
"The idea that there is a depth range within the atmospheres of Jupiter and (even more so) Saturn within which carbon would be stable as diamond does seem sensible," says Prof Raymond Jeanloz, one of the team who first predicted diamonds on Uranus and Neptune.
"And given the large sizes of these planets, the amount of carbon (therefore diamond) that may be present is hardly negligible."
However, Dr Nadine Nettelmann, of the University of California, Santa Cruz, said further work was needed to understand whether carbon can form diamonds in an atmosphere which is rich in hydrogen and helium - such as Saturn's.
[Image: The planet 55 Cancri e may not be so precious after all, a new study suggests.]
"Baines
and Delitsky considered the data for pure carbon, instead of a
carbon-hydrogen-helium mixture," she explained.
"We
cannot exclude the proposed scenario (diamond rain on Saturn and Jupiter) but
we simply have no data on mixtures in the planets. So we do not know if diamond
formation occurs at all."
Meanwhile,
an exoplanet that was believed to consist largely of diamond may not be so
precious after all, according to new research.
The
so-called "diamond planet" 55 Cancri e orbits a star 40 light-years
from our Solar System.
A study
in 2010 suggested it was a rocky world with a surface of graphite surrounding a
thick layer of diamond, instead of water and granite like Earth.
But new
research to be published in the Astrophysical Journal, calls this conclusion in
question, making it unlikely any space probe sent to sample the planet's
innards would dig up anything sparkling.
Carbon,
the element diamonds are made of, now appears to be less abundant in relation
to oxygen in the planet's host star - and by extension, perhaps the planet.
"Based
on what we know at this point, 55 Cancri e is more of a 'diamond in the
rough'," said author Johanna Teske, of the University of Arizona.
Uncertainty Principle
One of the biggest problems with quantum experiments is the seemingly unavoidable tendency of humans to influence the position and velocity of small particles. This happens just by our observing the particles, and it has quantum physicists frustrated. To combat this, physicists have created enormous, elaborate machines like particle accelerators that remove any physical human influence from the process of accelerating a particle's energy of motion.
Still, the mixed results quantum physicists find when examining the same particle indicate that we just can't help but affect the behavior of quanta -- or quantum particles. Even the light physicists use to help them better see the objects they're observing can influence the behavior of quanta. Photons, for example -- the smallest measure of light, which have no mass or electrical charge -- can still bounce a particle around, changing its velocity.
This is called Heisenberg's Uncertainty Principle. Werner Heisenberg, a German physicist, determined that our observations have an effect on the behavior of quanta. Heisenberg's Uncertainty Principle sounds difficult to understand -- even the name is kind of intimidating. But it's actually easy to comprehend, and once you do, you'll understand the fundamental principle of quantum mechanics.
Imagine that you're blind and over time you've developed a technique for determining how far away an object is by throwing a medicine ball at it. If you throw your medicine ball at a nearby stool, the ball will return quickly, and you'll know that it's close. If you throw the ball at something across the street from you, it'll take longer to return, and you'll know that the object is far away.
The problem is that when you throw a ball -- especially a heavy one like a medicine ball -- at something like a stool, the ball will knock the stool across the room and may even have enough momentum to bounce back. You can say where the stool was, but not where it is now. What's more, you could calculate the velocity of the stool after you hit it with the ball, but you have no idea what its velocity was before you hit it.
This is the problem revealed by Heisenberg's Uncertainty Principle. To know the velocity of a quark we must measure it, and to measure it, we are forced to affect it. The same goes for observing an object's position. Uncertainty about an object's position and velocity makes it difficult for a physicist to determine much about the object.
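For reference, the standard quantitative statement of the principle (a textbook formula, not part of the original article) bounds the product of the uncertainties in position and momentum:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad \hbar \approx 1.05 \times 10^{-34}\ \text{J·s}
```

Shrinking the position uncertainty Δx necessarily inflates the momentum uncertainty Δp (and hence the velocity uncertainty), and vice versa.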
Of course, physicists aren't exactly throwing medicine balls at quanta to measure them, but even the slightest interference can cause the incredibly small particles to behave differently.
This is why quantum physicists are forced to create thought experiments based on the observations from the real experiments conducted at the quantum level. These thought experiments are meant to prove or disprove interpretations -- explanations for the whole of quantum theory.
In the next section, we'll look at the basis for quantum suicide -- the Many-Worlds interpretation of quantum mechanics.
Maxwell's Equations
Maxwell's Equations are a set of 4 complicated equations that describe the world of electromagnetics. These equations describe how electric and magnetic fields propagate, interact, and how they are influenced by objects.
James Clerk Maxwell [1831-1879] was an Einstein/Newton-level genius who took a set of known experimental laws (Faraday's Law, Ampere's Law) and unified them into a symmetric, coherent set of equations known as Maxwell's Equations. Maxwell was one of the first to determine that the speed of propagation of electromagnetic (EM) waves was the same as the speed of light - and hence to conclude that EM waves and visible light were really the same thing.
Maxwell's Equations are critical in understanding Antennas and Electromagnetics. They are formidable to look at - so complicated that most electrical engineers and physicists don't even really know what they mean. Shrouded in complex math (which is likely so "intellectual" people can feel superior in discussing them), true understanding of these equations is hard to come by.
This leads to the reason for this website - an intuitive tutorial of Maxwell's Equations. I will avoid, if at all possible, the mathematical difficulties that arise, and instead describe what the equations mean. And don't be afraid - the math is so complicated that those who do understand complex vector calculus still cannot apply Maxwell's Equations in anything but the simplest scenarios. For this reason, intuitive knowledge of Maxwell's Equations is far superior to mathematical manipulation-based knowledge. To understand the world, you must understand what equations mean, and not just know mathematical constructs. I believe the accepted methods of teaching electromagnetics and Maxwell's Equations do not produce understanding. And with that, let's say something about these equations.
Maxwell's Equations are laws - just like the law of gravity. These equations are rules the universe uses to govern the behavior of electric and magnetic fields. A flow of electric current will produce a magnetic field. If the current flow varies with time (as in any wave or periodic signal), the magnetic field will also vary with time, and this changing magnetic field will in turn give rise to an electric field. Maxwell's Equations show that separated charge (positive and negative) gives rise to an electric field - and if this is varying in time as well, it will give rise to a propagating electric field, further giving rise to a propagating magnetic field.
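For concreteness, here are the four equations in their standard differential form in SI units (the usual textbook statement, added here for reference rather than taken from the original page):

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}
  && \text{(Gauss's law: charges produce electric fields)} \\
\nabla \cdot \mathbf{B} &= 0
  && \text{(no magnetic monopoles)} \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}
  && \text{(Faraday's law: a changing magnetic field produces an electric field)} \\
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
  && \text{(Ampere-Maxwell law: currents and changing electric fields produce magnetic fields)}
\end{aligned}
```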
To understand Maxwell's Equations at a more intuitive level than most Ph.Ds in Engineering or Physics, click through the links and definitions above. You'll find that the complicated math masks an inner elegance to these equations - and you'll learn how the universe operates the Electromagnetic Machine.
Sunday, December 29, 2013
Quantum Computing
The massive amount of processing power generated by computer manufacturers has not yet been able to quench our thirst for speed and computing capacity. In 1947, American computer engineer Howard Aiken said that just six electronic digital computers would satisfy the computing needs of the United States. Others have made similar errant predictions about the amount of computing power that would support our growing technological needs. Of course, Aiken didn't count on the large amounts of data generated by scientific research, the proliferation of personal computers or the emergence of the Internet, which have only fueled our need for more, more and more computing power.
Will we ever have the amount of computing power we need or want? If, as Moore's Law states, the number of transistors on a microprocessor continues to double every 18 months, the year 2020 or 2030 will find the circuits on a microprocessor measured on an atomic scale. And the logical next step will be to create quantum computers, which will harness the power of atoms and molecules to perform memory and processing tasks. Quantum computers have the potential to perform certain calculations significantly faster than any silicon-based computer.
Scientists have already built basic quantum computers that can perform certain calculations; but a practical quantum computer is still years away. In this article, you'll learn what a quantum computer is and just what it'll be used for in the next era of computing.
You don't have to go back too far to find the origins of quantum computing. While computers have been around for the majority of the 20th century, quantum computing was first theorized less than 30 years ago, by a physicist at the Argonne National Laboratory. Paul Benioff is credited with first applying quantum theory to computers in 1981. Benioff theorized about creating a quantum Turing machine. Most digital computers, like the one you are using to read this article, are based on the Turing Theory. Learn what this is in the next section.
Defining the Quantum Computer
The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of a tape of unlimited length that is divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write device reads these symbols and blanks, which gives the machine its instructions to perform a certain program. Does this sound familiar? Well, in a quantum Turing machine, the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be either 0 or 1 or a superposition of 0 and 1; in other words, the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.
Today's computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren't limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today's most powerful supercomputers.
This superposition of qubits is what gives quantum computers their inherent parallelism. According to physicist David Deutsch, this parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer that could run at 10 teraflops (trillions of floating-point operations per second). Today's typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
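To make the "n qubits, 2^n amplitudes" point concrete, here is a minimal NumPy sketch (my own illustration, not from the original article) that builds an equal superposition of n qubits as a state vector and shows how quickly the number of amplitudes grows:

```python
import numpy as np

def equal_superposition(n_qubits):
    """Return the state vector of n qubits, each put into (|0> + |1>)/sqrt(2).

    A classical register of n bits holds one of 2**n values at a time; the
    quantum state vector carries an amplitude for every one of those 2**n
    values at once, which is the "parallelism" described above.
    """
    single = np.array([1.0, 1.0]) / np.sqrt(2.0)   # one qubit in superposition
    state = np.array([1.0])
    for _ in range(n_qubits):
        state = np.kron(state, single)             # tensor product adds a qubit
    return state

for n in (1, 2, 3, 10, 20, 30):
    # 2**n complex amplitudes would be needed to store the state classically.
    print(f"{n:2d} qubits -> {2**n:,} amplitudes")

# A small example we can actually hold in memory:
psi = equal_superposition(3)
print(psi)                        # 8 equal amplitudes of 1/sqrt(8)
print(np.sum(np.abs(psi) ** 2))   # probabilities sum to 1
```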
Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them, and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system's integrity. Entanglement provides a potential answer. In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, and the second atom can take on the properties of the first atom. So if left alone, an atom will spin in all directions. The instant it is disturbed it chooses one spin, or one value; and at the same time, the second entangled atom will choose an opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.
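The "opposite spin" correlation can also be sketched numerically. The toy simulation below (my own illustration, assuming an ideal two-qubit Bell state and ignoring all the practical difficulties just described) samples joint measurement outcomes using the Born rule; the two qubits always come out with opposite values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-qubit Bell state (|01> + |10>)/sqrt(2); basis order is |00>, |01>, |10>, |11>.
bell = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2.0)

probs = np.abs(bell) ** 2                   # Born rule: outcome probabilities
outcomes = rng.choice(4, size=10, p=probs)  # simulate 10 joint measurements

for o in outcomes:
    a, b = (o >> 1) & 1, o & 1              # measured bit of qubit A and qubit B
    print(f"qubit A = {a}, qubit B = {b}")  # always opposite values
```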
Next, we'll look at some recent advancements in the field of quantum computing.
QUBIT CONTROL
Computer scientists control the microscopic particles that act as qubits in quantum computers by using control devices.
Ion traps use optical or magnetic fields (or a combination of both) to trap ions.
Optical traps use light waves to trap and control particles.
Quantum dots are made of semiconductor material and are used to contain and manipulate electrons.
Semiconductor impurities contain electrons by using "unwanted" atoms found in semiconductor material.
Superconducting circuits allow electrons to flow with almost no resistance at very low temperatures.
Today's Quantum Computers
Quantum computers could one day replace silicon chips, just like the transistor once replaced the vacuum tube. But for now, the technology required to develop such a quantum computer is beyond our reach. Most research in quantum computing is still very theoretical.
The most advanced quantum computers have not gone beyond manipulating more than 16 qubits, meaning that they are a far cry from practical application. However, the potential remains that quantum computers one day could perform, quickly and easily, calculations that are incredibly time-consuming on conventional computers. Several key advancements have been made in quantum computing in the last few years. Let's look at a few of the quantum computers that have been developed.
1998
Los Alamos and MIT researchers managed to spread a single qubit across three nuclear spins in each molecule of a liquid solution of alanine (an amino acid used to analyze quantum state decay) or trichloroethylene (a chlorinated hydrocarbon used for quantum error correction) molecules. Spreading out the qubit made it harder to corrupt, allowing researchers to use entanglement to study interactions between states as an indirect method for analyzing the quantum information.
2000
In March, scientists at Los Alamos National Laboratory announced the development of a 7-qubit quantum computer within a single drop of liquid. The quantum computer uses nuclear magnetic resonance (NMR) to manipulate particles in the atomic nuclei of molecules of trans-crotonic acid, a simple fluid consisting of molecules made up of six hydrogen and four carbon atoms. The NMR is used to apply electromagnetic pulses, which force the particles to line up. These particles in positions parallel or counter to the magnetic field allow the quantum computer to mimic the information-encoding of bits in digital computers.
Researchers at IBM-Almaden Research Center developed what they claimed was the most advanced quantum computer to date in August. The 5-qubit quantum computer was designed to allow the nuclei of five fluorine atoms to interact with each other as qubits, be programmed by radio frequency pulses and be detected by NMR instruments similar to those used in hospitals (see How Magnetic Resonance Imaging Works for details). Led by Dr. Isaac Chuang, the IBM team was able to solve in one step a mathematical problem that would take conventional computers repeated cycles. The problem, called order-finding, involves finding the period of a particular function, a typical aspect of many mathematical problems involved in cryptography.
2001
Scientists from IBM and Stanford University successfully demonstrated Shor's Algorithm on a quantum computer. Shor's Algorithm is a method for finding the prime factors of numbers (which plays an intrinsic role in cryptography). They used a 7-qubit computer to find the factors of 15. The computer correctly deduced that the prime factors were 3 and 5.
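For this specific case, the arithmetic behind Shor's Algorithm can be checked classically. The sketch below (my own illustration of the number theory only; the quantum computer's contribution is doing the period-finding step efficiently for large numbers) recovers the factors 3 and 5 from 15:

```python
from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a**r % n == 1 (brute force; a quantum computer
    performs this order-finding step efficiently for large n)."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

n, a = 15, 7                          # a must share no factor with n
r = find_period(a, n)                 # here r = 4
print("period:", r)
print(gcd(a ** (r // 2) - 1, n),      # 3
      gcd(a ** (r // 2) + 1, n))      # 5
```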
2005
The Institute of Quantum Optics and Quantum Information at the University of Innsbruck announced that scientists had created the first qubyte, or series of 8 qubits, using ion traps.
2006
Scientists in Waterloo and Massachusetts devised methods for quantum control on a 12-qubit system. Quantum control becomes more complex as systems employ more qubits.
2007
Canadian startup company D-Wave demonstrated a 16-qubit quantum computer. The computer solved a sudoku puzzle and other pattern matching problems. The company claims it will produce practical systems by 2008. Skeptics believe practical quantum computers are still decades away, that the system D-Wave has created isn't scaleable, and that many of the claims on D-Wave's Web site are simply impossible (or at least impossible to know for certain given our understanding of quantum mechanics).
If functional quantum computers can be built, they will be valuable in factoring large numbers, and therefore extremely useful for decoding and encoding secret information. If one were to be built today, no information on the Internet would be safe. Our current methods of encryption are simple compared to the complicated methods possible in quantum computers. Quantum computers could also be used to search large databases in a fraction of the time that it would take a conventional computer. Other applications could include using quantum computers to study quantum mechanics, or even to design other quantum computers.
But quantum computing is still in its early stages of development, and many computer scientists believe the technology needed to create a practical quantum computer is years away. Quantum computers must have at least several dozen qubits to be able to solve real-world problems, and thus serve as a viable computing method.
Thursday, December 26, 2013
Europa
Jupiter's icy moon Europa is slightly smaller than the Earth's Moon. Like the Earth, Europa is thought to have an iron core, a rocky mantle and a surface ocean of salty water. Unlike on Earth, however, this ocean is deep enough to cover the whole surface of Europa, and being far from the sun, the ocean surface is globally frozen over.
Europa orbits Jupiter every 3.5 days and is tidally locked -- just like Earth's Moon -- so that the same side of Europa faces Jupiter at all times. However, because Europa's orbit is eccentric (i.e. an oval or ellipse, not a circle), the tide is much higher when it is close to Jupiter than when it is far from Jupiter. Thus tidal forces raise and lower the sea beneath the ice, causing constant motion and likely causing the cracks we see in images of Europa's surface from visiting robotic probes.
This "tidal heating" causes Europa to be warmer than it would otherwise be at its average distance of about 780,000,000 km (485,000,000 miles) from the sun, more than five times as far as the distance from the Earth to the sun. The warmth of Europa's liquid ocean could prove critical to the survival of simple organisms within the ocean, if they exist.
Discovery:
Europa was discovered on 8 January 1610 by Galileo Galilei. This discovery, along with that of three other Jovian moons, was the first time a moon had been found orbiting a planet other than Earth. The discovery of the four Galilean satellites eventually led to the understanding that planets in our solar system orbit the sun, instead of our solar system revolving around Earth. Galileo apparently had observed Europa on 7 January 1610, but had been unable to differentiate it from Io until the next night.
How Europa Got its Name:
Galileo originally called Jupiter's moons the Medicean planets, after the Medici family and referred to the individual moons numerically as I, II, III, and IV. Galileo's naming system would be used for a couple of centuries.
It wouldn't be until the mid-1800s that the names of the Galilean moons, Io, Europa, Ganymede, and Callisto, would be officially adopted, and only after it became apparent that naming moons by number would be very confusing as new additional moons were being discovered.
Europa was originally designated Jupiter II by Galileo because it was the second satellite of Jupiter. Europa is named for the daughter of Agenor. Europa was abducted by Zeus (the Greek equivalent of the Roman god Jupiter), who had taken the shape of a spotless white bull. Europa was so delighted by the gentle beast that she decked it with flowers and rode upon its back. Zeus, seizing his opportunity, rode away with her across the ocean to the island of Crete, where he transformed back into his true shape. Europa bore Zeus many children, including Minos.
Unified Field Theory
Unified field theory, in particle physics, is an attempt to describe all fundamental forces and the relationships between elementary particles in terms of a single theoretical framework. In physics, forces can be described by fields that mediate interactions between separate objects. In the mid-19th century James Clerk Maxwell formulated the first field theory in his theory of electromagnetism. Then, in the early part of the 20th century, Albert Einstein developed general relativity, a field theory of gravitation. Later, Einstein and others attempted to construct a unified field theory in which electromagnetism and gravity would emerge as different aspects of a single fundamental field. They failed, and to this day gravity remains beyond attempts at a unified field theory.
At subatomic distances, fields are described by quantum field theories, which apply the ideas of quantum mechanics to the fundamental field. In the 1940s quantum electrodynamics (QED), the quantum field theory of electromagnetism, became fully developed. In QED, charged particles interact as they emit and absorb photons (minute packets of electromagnetic radiation), in effect exchanging the photons in a game of subatomic “catch.” This theory works so well that it has become the prototype for theories of the other forces.
During the 1960s and ’70s particle physicists discovered that matter is composed of two types of basic building block—the fundamental particles known as quarks and leptons. The quarks are always bound together within larger observable particles, such as protons and neutrons. They are bound by the short-range strong force, which overwhelms electromagnetism at subnuclear distances. The leptons, which include the electron, do not “feel” the strong force. However, quarks and leptons both experience a second nuclear force, the weak force. This force, which is responsible for certain types of radioactivity classed together as beta decay, is feeble in comparison with electromagnetism.
At the same time that the picture of quarks and leptons began to crystallize, major advances led to the possibility of developing a unified theory. Theorists began to invoke the concept of local gauge invariance, which postulates symmetries of the basic field equations at each point in space and time (see gauge theory). Both electromagnetism and general relativity already involved such symmetries, but the important step was the discovery that a gauge-invariant quantum field theory of the weak force had to include an additional interaction - namely, the electromagnetic interaction. Sheldon Glashow, Abdus Salam, and Steven Weinberg independently proposed a unified "electroweak" theory of these forces based on the exchange of four particles: the photon for electromagnetic interactions, and two charged W particles and a neutral Z particle for weak interactions.
During the 1970s a similar quantum field theory for the strong force, called quantum chromodynamics (QCD), was developed. In QCD, quarks interact through the exchange of particles called gluons. The aim of researchers now is to discover whether the strong force can be unified with the electroweak force in a grand unified theory (GUT). There is evidence that the strengths of the different forces vary with energy in such a way that they converge at high energies. However, the energies involved are extremely high, more than a million million times as great as the energy scale of electroweak unification, which has already been verified by many experiments.
Grand unified theories describe the interactions of quarks and leptons within the same theoretical structure. This gives rise to the possibility that quarks can decay to leptons and specifically that the proton can decay. Early attempts at a GUT predicted that the proton's lifetime must be in the region of 10^32 years. This prediction has been tested in experiments that monitor large amounts of matter containing on the order of 10^32 protons, but there is no evidence that protons decay. If they do in fact decay, they must do so with a lifetime greater than that predicted by the simplest GUTs. There is also evidence to suggest that the strengths of the forces do not converge exactly unless new effects come into play at higher energies. One such effect could be a new symmetry called "supersymmetry."
A successful GUT will still not include gravity. The problem here is that theorists do not yet know how to formulate a workable quantum field theory of gravity based on the exchange of a hypothesized graviton. See also quantum field theory.
Wednesday, December 25, 2013
Electromagnetism
Magnetic Effect Of Current Or Electromagnetism
The term "magnetic effect of current" means that "a current flowing in a wire produces a magnetic field around it". The magnetic effect of current was discovered by Oersted in 1820. Oersted found that a wire carrying a current was able to deflect a magnetic needle. Now, a magnetic needle can only be deflected by a magnetic field. Thus it was concluded that a current flowing in a wire always gives rise to a magnetic field round it. The magnetic effect of current is called electromagnetism which means that electricity produces magnetism.
Tenets Of Electromagnetism:
Magnetic Field Pattern Due To Straight Current-Carrying Conductor
The magnetic lines of force round a straight conductor carrying current are concentric circles whose centers lie on the wire.
The magnitude of magnetic field produced by a straight current-carrying wire at a given point is:
Directly proportional to the current passing in the wire, and
Inversely proportional to the distance of that point from the wire.
So, the greater the current in the wire, the stronger the magnetic field produced. And the greater the distance of a point from the current-carrying wire, the weaker the magnetic field produced at that point.
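These two proportionalities are captured by the standard textbook formula for a long straight wire (not quoted in the original, added here for reference), where I is the current, r is the perpendicular distance from the wire, and μ0 is the permeability of free space:

```latex
B = \frac{\mu_0 I}{2\pi r}
```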
Magnetic Field Pattern Due To A Circular Coil Carrying Current
We know that when a current is passed through a straight wire, a magnetic field is produced around it. It has been found that the magnetic effect of current increases if, instead of using a straight wire, the wire is converted into a circular coil. A circular coil consists of twenty or more turns of insulated copper wire closely wound together. When a current is passed through a circular coil, a magnetic field is produced around it. The lines of force are circular near the wire, but they become straight and parallel towards the middle point of the coil. In fact, each small segment of the coil is surrounded by such magnetic lines of force. At the center of the coil, all the lines of force aid each other due to which the strength of the magnetic field increases.
The magnitude of magnetic field produced by a current carrying wire at its center is:
Directly proportional to the current passing through the circular wire, and
Inversely proportional to the radius of the circular wire.
A current carrying circular wire (or coil) behaves as a thin disc magnet, whose one face is a north pole and the other face is a south pole.
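In the same spirit, the usual textbook expression (again a standard result added for reference, not stated in the original) for the field at the center of a flat circular coil of N turns, radius R, carrying current I is:

```latex
B = \frac{\mu_0 N I}{2R}
```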
The strength of magnetic field produced by a current carrying circular coil can be increased
By increasing the number of turns of wire in the coil
By increasing the current flowing through the coil
By decreasing the radius of the coil.
Solenoids
The solenoid is a long coil containing a large number of close turns of insulated copper wire. The magnetic field produced by a current carrying solenoid is similar to the magnetic field produced by a bar magnet. The lines of magnetic force pass through the solenoid and return to the other end. If a current carrying solenoid is suspended freely, it comes to rest pointing North and South like a suspended magnetic needle. One end of the solenoid acts like a N-pole and the other end a S-pole. Since the current in each circular turn of the solenoid flows in the same direction, the magnetic field produced by each turn of the solenoid adds up, giving a strong resultant magnetic field inside the solenoid. A solenoid is used for making electromagnets.
The strength of magnetic field produced by a current carrying solenoid is:
Directly proportional to the number of turns in the solenoid
Directly proportional to the strength of current in the solenoid
Dependent on the nature of "core material" used in making the solenoid. The use of soft iron rod as core in a solenoid produces the strongest magnetism.
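These three statements correspond to the standard formula for a long solenoid (a textbook result added here for reference), where n is the number of turns per unit length, I is the current, and μr is the relative permeability of the core material (about 1 for air, and very large for soft iron):

```latex
B = \mu_0 \mu_r \, n I
```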
Electromagnet:
An electric current can be used for making temporary magnets known as electromagnets. An electromagnet works on the magnetic effect of current. It has been found that if a soft iron rod called a core is placed inside a solenoid, then the strength of the magnetic field becomes very large because the iron core is magnetized by induction. This combination of a solenoid and a soft iron core is called an electromagnet. Thus, an electromagnet consists of a long coil of insulated copper wire wound on a soft iron core.
The electromagnet acts as a magnet only so long as the current is flowing in the solenoid. The moment the current is switched off the solenoid is demagnetized. The core of the electromagnet must be of soft iron because soft iron loses all of its magnetism when current in the coil is switched off. Steel is not used in electromagnets, because it does not lose all its magnetism when the current is stopped and becomes a permanent magnet.
Electromagnets can be made of different shapes and sizes depending on the purpose for which they are to be used.
Factors Affecting The Strength Of An Electromagnet:
The strength of an electromagnet is: 1) Directly proportional to the number of turns in the coil. 2) Directly proportional to the current flowing in the coil. 3) Inversely proportional to the length of air gap between the poles.
In general, an electromagnet is often considered better than a permanent magnet because it can produce very strong magnetic fields and its strength can be controlled by varying the number of turns in its coil or by changing the current flowing through the coil.
Renormalization
1. The Game Called "Renormalization"
Okay, let's see.... let's consider a quantum field theory whose Lagrangian has a few free parameters — masses and charges and so on. Just to sound cool, let's call all of these numbers "coupling constants". Now to get finite answers from this theory, we need to impose a "frequency cutoff". We do this by simply ignoring all waves in our fields that have a frequency higher than some fixed value. This works best after we replace "t" by "it" everywhere in our equations, so let's do that — this is called a "Wick rotation" by the experts. Now we're working with a theory on Euclidean spacetime, and the frequency cutoff can also be thought of as a distance cutoff. In other words, it amounts to ignoring effects that involve fields varying on distance scales shorter than some distance D.
In what follows, you have to keep your eye on the parameters in the theory: I'm gonna keep shuffling them around, so to check that I'm not conning you, you have to make sure there's always the same number of 'em around — sort of like watching a magician playing a shell game. So make sure you see what we're starting with! Our Lagrangian has some numbers in it called "coupling constants", but our theory really has one more parameter: the cutoff scale D.
Now our Lagrangian has some coupling constants in it, but it's hard to measure these directly. Even though they have names like "mass", "charge" and so on, these parameters aren't what you directly measure by colliding particles in an accelerator. In fact, if you try to measure the charge of the electron (say) by smashing two electrons into each other in an accelerator, seeing how much they repel each other, and naively using the obvious formula to determine their charge, the answer you get will depend on their momenta in the center-of-mass frame — or in other words, how hard you smashed them into each other. The same is true for the electron mass and any other coupling constants there are in the Lagrangian of our theory. They have a "bare" value — the value that appears in the Lagrangian — and a "physical" value — the value you measure by doing an experiment and an obvious naive sort of calculation. The "physical" values depend on the "bare" values, the cutoff D, and a momentum scale p.
(Of course, we could cleverly try to use a less naive formula to determine the bare values of the coupling constants from experiment, but let's not do that — let's just use the stupid obvious formula that neglects the funky quantum effects that are making the physical values differ from the bare values! By being deliberately "naive" here, we're actually being very smart — as you'll eventually see.)
There are all sorts of games we can play now. The simplest, oldest game is this. We can measure the physical coupling constants at some momentum scale p, and then figure out which bare coupling constants would give these physical values — assuming some cutoff D. Then we can try to take a limit as D → 0, adjusting the bare coupling constants as we take the limit, in order to keep the predicted physical coupling constants at their experimentally determined values. This "continuum limit", if it exists, will be a theory without any shortest distance scale in it. That's very important if you think spacetime is a continuum!
This game is called "renormalization".
Sometimes you win this game — and sometimes you lose. The main thing to worry about is this: even if certain bare coupling constants are zero, the corresponding physical coupling constants may be nonzero. For example, if you start with a Lagrangian in which the mass of some particle is zero, you might not have bothered to include that mass among your bare coupling constants. But its physical mass (measured at some momentum scale) can still be nonzero. In this case, we say the particle "acquires a mass through its interactions with other particles". This sort of thing happens all the time.
What this means is that to succeed in adjusting the bare coupling constants to fit the experimentally observed physical coupling constants, we need to start with a Lagrangian that has enough bare coupling constants to begin with. You can't expect to fit N numbers with fewer than N numbers!
So, if someone hands you a Lagrangian, you may have to stick in some extra terms with some extra bare coupling constants before playing the renormalization game. If you can succeed with only finitely many extra terms, you say your theory is "renormalizable". If you need infinitely many terms, you throw up your hands in despair and say the theory is "nonrenormalizable". A nonrenormalizable Lagrangian is like a hydra-headed monster that keeps needing more extra terms to be added the more you add.
Note: when we try to take the continuum limit, we don't care if the bare coupling constants do something screwy like go to infinity. All we care about is whether the experimental predictions of our theory converge. If the bare coupling constants converge we say our theory is "finite". But truly realistic theories usually aren't this nice.
2. The "Renormalization Group" Game
Okay, now I want to talk about the renormalization group. I'm deliberately going to simplify things to the point where I'm verging on inaccuracy, but hopefully I won't actually say anything false.
So, let's recall what we've got. We have a quantum field theory described by a Lagrangian with a bunch of coupling constants in it — let's call them "bare" coupling constants. We can write all these bare coupling constants in a list and think of it as a vector: call it C. But to do calculations with this theory we need one more number, too: we need to ignore effects going on at length scales smaller than some distance D, called the "cutoff".
Now, starting from these numbers, we can compute the "physical" coupling constants at any momentum scale. For example, the measured charge of the electron depends on the momentum with which we collide two electrons. Another way to put it is that the physical coupling constants depend on a distance scale: for example, the measured charge of the electron depends on the distance at which you measure its charge. These two ways of thinking about it are equivalent, since using hbar and c we can freely convert between momentum and inverse distance.
Let's work with distance instead of momentum, and call the distance at which we measure the physical coupling constants D'.
So: if we know the "bare" coupling constants C and the cutoff D, we can compute the "physical" coupling constants C' at any distance scale D'. In short:
C' = f(C,D,D')
Now let's play the "renormalization group" game. In this game, we fix the bare coupling constants and the cutoff, and see how the physical coupling constants C' change as we vary the distance scale D' at which we measure them. It's fun to imagine turning a dial to adjust the distance scale D' and watching the physical coupling constants C' move around like a little dot in n-dimensional space, where n is the number of coupling constants. People draw pictures of this and speak of "running coupling constants" or the "renormalization group flow".
Note that we can play this game whether or not our field theory is renormalizable! In the last section I talked about a different game, called "renormalization". That game was all about letting the cutoff D go to zero. For "renormalizable" theories there's a nice way to do it, while for "nonrenormalizable" ones it's a real mess. But here we aren't letting D go to zero.
So what happens if we start with a nonrenormalizable theory and play this "renormalization group" game? Our Lagrangian will typically have a bunch of terms in it: some nasty ones that are making the theory nonrenormalizable, and some nice ones that would give a renormalizable theory if we just threw out the nasty ones. Each of these terms is multiplied by a coupling constant. Now let's look at the corresponding physical coupling constants as we crank up the distance scale D'.
As we do this, the physical coupling constants in front of the nasty nonrenormalizable terms get smaller and smaller, approaching zero! At large distances, nonrenormalizable interactions become irrelevant!
This is an incredibly important fact, because it may explain why the quantum field theory that seems to describe our world — the Standard Model — is renormalizable. There may be all sorts of strange quantum gravity stuff going on at very short distance scales — perhaps spacetime is not even a continuum! But if at larger scales we assume that ordinary quantum field theory on flat spacetime is a reasonably accurate approximation to what's going on, then this renormalization group stuff assures us that at still larger scales, nonrenormalizable interactions are going to look very weak.
In fact, this may explain why gravity is so weak! If we treat quantum gravity perturbatively as a quantum field theory on flat spacetime, it's nonrenormalizable. If we assume the gravitational constant is reasonably large near the Planck scale, and we follow the renormalization group flow, we find that it's very small at macroscopic distance scales. In fact, we even get the right order of magnitude. But this isn't surprising: it's really just the magic of dimensional analysis.
This sort of idea goes back to Kenneth Wilson who won the Nobel prize in physics in 1982, for work he did around 1972 on the renormalization group and critical points in statistical mechanics. His ideas are now important not only in statistical mechanics but also in quantum field theory. For a nice short summary of the "Wilsonian philosophy of renormalization", let me paraphrase Peskin and Schroeder:
In Chapter 10 we took the philosophy that the distance cutoff D should be disposed of by taking the limit D → 0 as quickly as possible. We found that this limit gives well-defined predictions only if the Lagrangian contains no coupling constants with dimensions of length^d with d > 0. From this viewpoint, it seemed exceedingly fortunate that quantum electrodynamics, for example, contained no such coupling constants since otherwise this theory would not yield well-defined predictions. Wilson's analysis takes just the opposite point of view, that any quantum field theory is defined fundamentally with a distance cutoff D that has some physical significance. In statistical mechanical applications, this distance scale is the atomic spacing. In quantum electrodynamics and other quantum field theories appropriate to elementary particle physics, the cutoff would have to be associated with some fundamental graininess of spacetime, perhaps the result of quantum fluctuations in gravity. We discuss some speculations on the nature of this cutoff in the Epilogue. But whatever this scale is, it lies far beyond the reach of present-day experiments. Wilson's arguments show that this circumstance explains the renormalizability of quantum electrodynamics and other quantum field theories of particle interactions. Whatever the Lagrangian of quantum electrodynamics was at the fundamental scale, as long as its couplings are sufficiently weak, it must be described at the energies of our experiments by a renormalizable effective Lagrangian.
3. Ultraviolet and Infrared Fixed Points
In the last section I described the "renormalization group" game. Now I want to explain "ultraviolet and infrared fixed points" of the renormalization group, but first let me summarize what I already said. We have a quantum field theory described by a Lagrangian with a bunch of terms multiplied by numbers called "bare" coupling constants — we call the list of all of them C. We ignore effects going on at length scales smaller than some distance D called the "cutoff". And now we can compute stuff....
In particular, we can compute the so-called "physical" coupling constants C' as measured at any given length scale D'. And we can watch how C' changes as we slowly crank D' up. This is called the "renormalization group flow".
Various things can happen. I already said a bit about this: I said that for nonrenormalizable terms in the Lagrangian, the physical coupling constants shrink as we increase D'.
In fact we can say more: they scale roughly like D' to some negative power. If you're smart, you can even guess what this power is by staring at the term in question and doing some dimensional analysis! Using Planck's constant and the speed of light you can express all units in terms of length. If a particular bare coupling constant c in front of some term in the Lagrangian has dimensions of length to the power d, then the corresponding physical constant c' will scale roughly like D' to the power -d. More precisely:
c'/c ~ (D'/D)^(-d)
In particular, this term will be nonrenormalizable if d is greater than zero.
Of course, another way to put this is that for nonrenormalizable theories, the physical coupling constants grow as we decrease D'. This is another way to see why nonrenormalizable theories are "bad" — they involve interactions that get ridiculously strong at short distance scales. Why is this bad? Well, it's certainly bad if you're trying to do perturbation theory and think of the interaction as a small perturbation. It may not always be bad in any more profound sense, because there are nonrenormalizable theories that are perfectly consistent, mathematically speaking.
On the other hand, if d is less than zero we say our term in the Lagrangian is "superrenormalizable". In this case the physical coupling constant scales roughly like D' to some positive power. In the same sense that nonrenormalizable theories are not nice, superrenormalizable theories are super-nice.
Finally, for "renormalizable" theories, the physical coupling constants scale roughly like D' to the zeroth power — i.e., they're roughly constant. They are right on the brink between nasty and nice. We actually have to do a more careful analysis to see if they are nasty or nice. For example, quantum electrodynamics is renormalizable, but it turns out to be nasty: at first the charge of the electron looks almost constant as we decrease D', but it actually grows — logarithmically at first, but then faster and faster. On the other hand, lots of nonabelian gauge theories are nice: the coupling constant slowly shrinks to zero as we decrease D'. We say they are "asymptotically free".
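As a purely illustrative toy model (my own sketch, not part of the original essay), here is a numerical picture of these two behaviors. A coupling g obeys a one-loop-style flow equation dg/dt = b g^3, with t = ln(D/D') increasing as we go to shorter distances; a positive b mimics the QED-like case where the coupling grows at short distances, and a negative b mimics asymptotic freedom:

```python
import numpy as np

def run_coupling(g0, b, t_max=10.0, steps=1000):
    """Integrate the toy flow dg/dt = b * g**3, where t = ln(D/D') grows
    as the distance scale D' shrinks. b > 0 mimics a QED-like coupling,
    b < 0 mimics an asymptotically free one."""
    g, dt = g0, t_max / steps
    history = [g]
    for _ in range(steps):
        g += b * g**3 * dt          # simple Euler step of the flow equation
        history.append(g)
    return np.array(history)

qed_like  = run_coupling(g0=0.30, b=+0.5)   # grows at short distances
asym_free = run_coupling(g0=0.30, b=-0.5)   # shrinks at short distances

print("QED-like coupling: start %.3f -> short-distance %.3f" % (qed_like[0], qed_like[-1]))
print("asympt. free:      start %.3f -> short-distance %.3f" % (asym_free[0], asym_free[-1]))
```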
Now, to get ready for my explanation about what all this has to do with 2nd-order phase transitions, let's just introduce some concepts to help us tie all these ideas together. We've seen that sometimes when we keep making D' smaller and smaller, the physical coupling constants C' approach some particular value. I've just talked about the case when they approach zero, but other cases are important too! Whenever this sort of thing happens, we say the limiting value of C' is an "ultraviolet fixed point of the renormalization group". Here "ultraviolet" refers to the fact that we are looking at ever smaller distance scales.
Similarly, if C' approaches some value when D' keeps getting larger, we say that value is an "infrared fixed point".
For example, suppose we have a superrenormalizable or asymptotically free theory with just one coupling constant. Then as we keep making D' smaller, the physical coupling constant approaches zero, so zero is an ultraviolet fixed point. Of course "zero" here corresponds to a free field theory with no interaction at all. So free theories are ultraviolet fixed points of superrenormalizable or asymptotically free theories. Similarly, free theories are infrared fixed points of nonrenormalizable theories, and certain renormalizable but nasty theories like quantum electrodynamics.
4. Second-Order Phase Transitions
Okay, now I'm going to finish by describing Wilson's ideas relating renormalization to 2nd-order phase transitions. First of all, what's a 2nd-order phase transition?
Actually, first of all, what's a first-order phase transition?
The most familiar examples are when ice melts or liquid water boils: we have two phases of matter, and the internal energy changes discontinuously as we go from one phase to another. But look at this phase diagram, which I borrowed from Scott Lanning:
[ASCII phase diagram, pressure (vertical axis) versus temperature (horizontal axis): solid, liquid and gas regions separated by sharp boundaries, with the liquid-gas boundary ending at a point marked X, the critical point.]
We see something interesting: the sharp boundary between liquid and gas phases fizzles out at a point called the "critical point". Above this point there is no real difference between a liquid and gas! This critical point is a "2nd-order phase transition", because while the internal energy doesn't change discontinuously there, its first derivative becomes infinite there.
Right at the critical point, something very cool happens: the system transforms in a simple way under scaling! What does this mean? Well, if you get some water right at the critical point, it looks "opalescent" like a moonstone. If you stare at it carefully, you'll see a bunch of liquid water droplets of all different sizes floating around in steam. However, if you look closely at any of these droplets, you see they are full of bubbles of steam, and if you look closely at the steam, you see it's full of little droplets of liquid! It's like a random fractal: no matter how closely you look, you see the same thing. You can't tell if you're looking at water droplets in steam or bubbles of steam in water, and there is no distinguished length scale... at least until you get down to the scale of atoms, that is.
Building on insights due to Landau, Kadanoff and others, Wilson realized that you could come up with a very precise theory of critical points by taking advantage of this symmetry under change of scale. In particular, this theory lets us understand so-called "critical exponents".
To explain this, let me switch to a simpler example of a critical point. Consider a ferromagnet like a crystal of iron. At temperatures above a certain point called the Curie temperature, the iron will not be magnetized. But as we cool it below the Curie point the spins of certain electrons in the atoms will line up and the iron will become magnetized. If there is an external magnetic field around when we cool the iron below the Curie temperature, the spins will line up with this magnetic field. Suppose the magnetic field points along the z axis - either up or down. Then we have the following phase diagram:
[ASCII phase diagram, external magnetic field (vertical axis) versus temperature (horizontal axis): a "magnetized up" region above the temperature axis, a "magnetized down" region below it, and an unmagnetized region beyond the point marked X on the axis, the critical (Curie) point.]
The sharp boundary between the "up" and "down" magnetized phases fizzles out at the Curie temperature. The Curie temperature is a critical point! Right at this critical point the magnet displays symmetry under scaling. If we look at the atoms in the crystal lattice and see which ones are "spin-up" and which ones are "spin-down", at the critical point we see regions of spin up and regions of spin down, but all these regions are speckled with smaller regions of the opposite type, and so on... on down to the length scale set by the crystal lattice itself.
To describe this scaling symmetry a bit more mathematically, let's simplify things a bit and imagine that for each point x in the crystal lattice we have a variable s(x) which equals 1 if that atom is spin-up and -1 if it's spin-down. When the crystal is in thermal equilibrium this variable keeps randomly flipping sign, so we can think of it as a random variable. This means we can talk about its mean, standard deviation and stuff like that.
When the external magnetic field is zero, the mean of s(x) is zero:
<s(x)> = 0
because each atom has a 50-50 chance of being spin-up or spin-down. This isn't particularly interesting. What's interesting is the mean of the product of s(x) and s(y) for two different points in the lattice, x and y:
<s(x)s(y)>
This is called a "2-point function". It measures the correlation of spins at different points in the lattice, since it equals 1 if the two spins always point the same way and 0 if they are completely uncorrelated.
The 2-point function only depends on the distance between x and y. Away from the critical point it decays exponentially with distance (at least when the external magnetic field is zero), and this exponential decay determines a special length scale called the "correlation length":
<s(x)s(y)> ~ exp(-|x-y|/L)
But as we approach the critical point, the correlation length goes to infinity, and right at the critical point, the 2-point function decays like some power of distance:
<s(x)s(y)> ~ 1/|x-y|^d
The number d is an example of what we call a "critical exponent".
A similar thing is true for all the higher "n-point functions", at least if we define them correctly, which I won't bother to do here. They all satisfy nice power laws at the critical point. This is what people mean when they say that a system at a critical point transforms simply under scaling.
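As an aside, the kind of measurement being described here is easy to mock up numerically. The sketch below (my own illustration, not from the original essay) runs a small 2D Ising model at its exact critical temperature using the Metropolis algorithm and estimates the 2-point function <s(x)s(y)> at a few separations; on such a tiny lattice the power-law decay is only roughly visible, but the idea is the same:

```python
import numpy as np

rng = np.random.default_rng(1)

L = 16                                    # small periodic L x L lattice of spins +1/-1
T = 2.0 / np.log(1.0 + np.sqrt(2.0))      # exact 2D Ising critical temperature, ~2.269
spins = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(s, T):
    """One sweep of single-spin-flip Metropolis updates."""
    for _ in range(s.size):
        i, j = rng.integers(L), rng.integers(L)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2.0 * s[i, j] * nb           # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

for _ in range(1000):                     # equilibrate at the critical point
    metropolis_sweep(spins, T)

corr = np.zeros(L // 2)                   # estimate <s(x) s(x+r)> along rows
n_meas = 200
for _ in range(n_meas):
    for _ in range(2):                    # a couple of sweeps between measurements
        metropolis_sweep(spins, T)
    for r in range(L // 2):
        corr[r] += np.mean(spins * np.roll(spins, r, axis=1))
corr /= n_meas

for r in (1, 2, 4, 8):
    print(f"<s(x)s(x+{r})> ~ {corr[r]:.3f}")   # decays roughly like a power of r
```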
Now, I'm oversimplifying something important here, so I'd better explain it. These power laws like
<s(x)s(y)> ~ 1/|x-y|^d
are really only approximate! Actually this is obvious, because the left hand side can't get bigger than 1, while the right hand side goes to infinity as |x-y| goes to zero. In reality, the 2-point function behaves in a very complicated way when the distance between our two atoms is very small. It's only when the distance gets big that things simplify and the power law becomes a better and better approximation.
Does this remind you of anything?
It should: this is where the renormalization group comes in! We can imagine "zooming out" on our crystal, looking at it from ever larger distance scales. As we do, things simplify: we can forget about individual atoms and approximate the situation by a field theory defined in the continuum. In fact, we can try to use one of the field theories that we've been talking about in the previous sections! Remember, quantum field theory in Euclidean space is just the same as statistical mechanics. Quantum field theory needs a cutoff, but we've got one: the distance between atoms in our crystal. So we're all set: we can write down some Lagrangian and start playing the renormalization group game to see what happens as we zoom out.
You may be suspicious here: how are we ever going to guess which Lagrangian corresponds to our original problem involving a crystal of iron? After all, iron is complicated stuff!
Luckily, it's not so bad. At short distance scales, to get a decent approximation to our original problem, we may need to start with a really complicated Lagrangian. However, suppose we do this. Then as we zoom out to large distance scales, the renormalization group game says that the Lagrangian will simplify. For example, we've already seen that nonrenormalizable terms in the Lagrangian become "irrelevant" as we go to large distance scales: the physical coupling constants in front of them go to zero!
More generally, we shouldn't be at all surprised if our physical coupling constants approach an infrared fixed point as we zoom out, letting the distance scale approach infinity. This is exactly what infrared fixed points are all about! Even better, all sorts of theories with different bare coupling constants can approach the same infrared fixed point. We say two different theories, or two different physical systems, are in the same "universality class" if they approach the same infrared fixed point as we crank up the distance scale.
For example, when we're studying what happens at the Curie temperature, lots of different ferromagnets lie in the same universality class. Indeed, it turns out that you can study a lot of them using slight variations of one of the simplest quantum field theories of all: the φ^4 theory.
There is a lot more to say, and I'm too tired to say most of it, but there's one thing I must tell you, just to wrap up some loose ends. Wilson's real triumph was to calculate critical exponents like the number d in the power law for the 2-point function:
<s(x)s(y)> ~ 1/|x-y|^d
How did he do it? Well, Landau already had one way to do this, which gives just the results you would guess using dimensional analysis. But that method didn't always give the right answers. To get the right answers, it helps to realize that n-point functions are closely related to physical coupling constants. In fact, while I never actually defined the physical coupling constants, they are really just a way of extracting some information about n-point functions. So if we calculate the "running of coupling constants" using the renormalization group game, we can work out the critical exponents.