Nexelia Academy · Official Revision Notes
Complete A-Level revision notes · 26 chapters
This chapter introduces the fundamental concept of physical quantities, emphasizing that they are composed of both a numerical magnitude and a unit. It establishes the internationally agreed-upon SI system, covering base and derived units, and provides methods for checking the dimensional consistency of equations. The chapter also delves into the crucial aspects of errors and uncertainties in measurements, differentiating between systematic and random errors, and explaining the distinction between accuracy and precision. Finally, it introduces scalar and vector quantities, detailing their representation and methods for addition, subtraction, and resolution.
physical quantity — A feature of something which can be measured, consisting of a numerical value and a unit.
Physical quantities are fundamental to physics, allowing for objective description and comparison of phenomena. Like a recipe ingredient that needs both a number (e.g., 2) and a unit (e.g., cups) to be meaningful, a physical quantity needs both a magnitude and a unit for complete specification.
Students often think that a number alone is sufficient to describe a quantity, but actually the unit is vital for context and meaning. Always include the correct unit with any numerical answer for a physical quantity; omitting units is a common error that loses marks.
Système Internationale (SI) — A single, internationally agreed-upon system of units based on the metric system of measurement.
The SI system provides a coherent set of units for all physical quantities, facilitating global scientific communication and consistency in measurements. Like a universal language for measurements, SI ensures that scientists worldwide can understand and replicate each other's work without confusion over different unit systems.
Always use SI units in calculations unless explicitly instructed otherwise, and ensure all conversions to SI are correct.
base quantities — The fundamental physical quantities upon which the SI system is founded.
These are quantities that are considered to be dimensionally independent and cannot be expressed in terms of other base quantities. There are seven SI base quantities, each with a defined base unit, acting like the primary colours from which all other colours can be mixed.
base units — The unique units defined at world conventions for each of the seven fundamental or base quantities.
These units form the foundation of the SI system, with all other units (derived units) being expressed as combinations of these base units. Their precise definitions ensure consistency in measurement, much like standard weights and measures kept in a national vault.
Students often confuse base units with derived units, but actually base units are fundamental and cannot be broken down further into other units, unlike derived units.
derived units — Units for quantities that are expressed as products or quotients of the SI base units.
All physical quantities apart from the base quantities have derived units. These units are formed by combining base units according to the physical relationships between quantities, similar to how complex words are formed by combining simpler letters.
Be able to recall and list the five AS-level SI base quantities (mass, length, time, electric current, temperature) and their symbols and units. When asked to express a derived unit in base units, ensure you only use the symbols for the base units (e.g., kg m s⁻² for the newton).
homogeneous — An equation is homogeneous if each term involved in the equation has the same base units.
Checking for homogeneity is a crucial way to verify the dimensional consistency of an equation. If an equation is not homogeneous, it is dimensionally incorrect and therefore invalid, much like ensuring all ingredients in a recipe are measured in the same type of unit before adding them.
Students often think that if an equation is homogeneous, it must be correct, but actually homogeneity only confirms dimensional consistency; numerical factors or the overall validity might still be wrong.
When asked to show an equation is homogeneous, explicitly state the base units for each term and show they are identical. Do not forget to mention that pure numbers have no units.
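The homogeneity check described above can be sketched in code by tracking base-unit exponents for each term. The helper names below (`multiply_units`, `is_homogeneous`) are my own, not syllabus notation; this is a minimal illustration, not a full unit system.

```python
# A minimal sketch: represent a unit as a dict of base-unit exponents,
# e.g. m s^-2 -> {"m": 1, "s": -2}, and check that every term of an
# equation reduces to the same base units.

def multiply_units(a, b):
    """Combine base-unit exponents when two quantities are multiplied."""
    out = dict(a)
    for base, exp in b.items():
        out[base] = out.get(base, 0) + exp
        if out[base] == 0:
            del out[base]        # drop cancelled units
    return out

def is_homogeneous(*terms):
    """True if every term has identical base units."""
    return all(t == terms[0] for t in terms)

# Check s = ut + (1/2)at^2 term by term; the pure number 1/2 has no units.
metre = {"m": 1}
velocity = {"m": 1, "s": -1}
acceleration = {"m": 1, "s": -2}
second = {"s": 1}

ut = multiply_units(velocity, second)                                # units of ut
at2 = multiply_units(acceleration, multiply_units(second, second))   # units of at^2
print(is_homogeneous(metre, ut, at2))  # True: each term reduces to m
```

Note that homogeneity passing here says nothing about the factor of ½, matching the caveat above: the check confirms dimensional consistency only.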
order of magnitude — The power of ten to which a number is raised, used to estimate the size of a quantity.
Estimating the order of magnitude provides a quick way to check if a calculated answer is sensible, especially in physics where quantities can vary enormously. It helps in identifying gross errors in calculations, like quickly estimating if a number is in the hundreds, thousands, or millions.
The SI system uses prefixes to denote decimal submultiples or multiples of units, such as milli- or kilo-. Understanding these prefixes is essential for expressing quantities across vast scales. Making reasonable estimates of physical quantities, often to one significant figure or by their order of magnitude, is a valuable skill for checking the plausibility of calculations and understanding the scale of phenomena within the syllabus.
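Order-of-magnitude estimates and prefix conversions can be sketched briefly. The `order_of_magnitude` helper and the `PREFIXES` table are illustrative assumptions (one common convention: nearest power of ten in log space), not syllabus notation.

```python
import math

# Illustrative helpers: estimate a quantity's order of magnitude as the
# nearest power of ten, and convert SI prefixes as powers of ten.

PREFIXES = {"milli": 1e-3, "centi": 1e-2, "kilo": 1e3, "mega": 1e6}

def order_of_magnitude(value):
    """Exponent n such that value is roughly 10^n (nearest in log space)."""
    return round(math.log10(abs(value)))

print(order_of_magnitude(3000))         # 3 -> thousands
print(order_of_magnitude(5e-4))         # -3 -> roughly a milli-unit
print(2.5 * PREFIXES["kilo"])           # 2.5 km expressed in metres
```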
uncertainty — The total range of values within which a measurement is likely to lie.
Uncertainty quantifies the doubt associated with a measurement, indicating the interval where the true value is expected to be found. It is inherent in all measurements and can be expressed as absolute or percentage uncertainty, much like a weather forecast giving a temperature range rather than a single exact value.
Students often think uncertainty implies a mistake, but actually it acknowledges the inherent limitations of measurement, not an error in technique.
absolute uncertainty — The range of values (e.g., ±0.5 cm) that directly indicates the uncertainty in a measurement.
This is the direct numerical value of the uncertainty, expressed in the same units as the measured quantity. It represents the maximum possible deviation from the stated value. For example, if a length is 10 cm ± 0.1 cm, the ±0.1 cm is the absolute uncertainty.
percentage uncertainty — The absolute uncertainty expressed as a percentage of the measured value.
This provides a relative measure of uncertainty, useful for comparing the precision of different measurements or for combining uncertainties in multiplication and division. It is calculated as (absolute uncertainty / measured value) × 100%. If a measurement is 100 g ± 1 g, the percentage uncertainty is 1%.
Uncertainty in sum/difference
For quantities added or subtracted (x = y + z or x = y − z), the absolute uncertainties are summed: Δx = Δy + Δz.
Fractional uncertainty in product/quotient
For quantities multiplied or divided (x = Ayz or x = Ay/z, where A is a constant), the fractional uncertainties are summed: Δx/x = Δy/y + Δz/z.
Percentage uncertainty in product/quotient
For quantities multiplied or divided (x = Ayz or x = Ay/z, where A is a constant), the percentage uncertainties are summed.
Fractional uncertainty with powers
For quantities raised to a power (x = Ay^a z^b, where A is a constant), the fractional uncertainties are multiplied by their respective powers and then summed: Δx/x = a(Δy/y) + b(Δz/z).
Percentage uncertainty with powers
For quantities raised to a power (x = Ay^a z^b, where A is a constant), the percentage uncertainties are multiplied by their respective powers and then summed.
State uncertainty to one significant figure and the measured value to the same number of decimal places as the uncertainty. When combining uncertainties for addition or subtraction, always add the absolute uncertainties. When combining uncertainties for multiplication or division, add the percentage uncertainties.
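The combination rules above can be written as small helper functions. The function names and the worked density example are illustrative, not from the notes.

```python
# A sketch of the uncertainty-combination rules for sums, products and powers.

def uncertainty_sum(dy, dz):
    """x = y + z or x = y - z: add the absolute uncertainties."""
    return dy + dz

def fractional_uncertainty_product(y, dy, z, dz):
    """x = A*y*z or x = A*y/z: add the fractional uncertainties."""
    return dy / y + dz / z

def fractional_uncertainty_powers(y, dy, a, z, dz, b):
    """x = A * y^a * z^b: weight fractional uncertainties by the powers."""
    return abs(a) * dy / y + abs(b) * dz / z

# Example: density rho = m / V with m = 100 g ± 1 g and V = 50 cm^3 ± 1 cm^3.
frac = fractional_uncertainty_product(100, 1, 50, 1)
print(f"{frac * 100:.0f}%")   # 1% + 2% = 3% percentage uncertainty in rho
```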
accuracy — The closeness of a measured value to the 'true' or 'known' value.
Accuracy reflects how well a measurement represents the actual value of the quantity being measured. It is affected by systematic errors and can be improved by reducing them, much like hitting the bullseye on a dartboard.
precision — How close a set of measured values are to each other.
Precision refers to the reproducibility and consistency of measurements. A precise set of readings will have a small spread of values, indicating low random error, even if they are not close to the true value. This is like hitting the same spot on a dartboard repeatedly, even if it's not the bullseye.
Students often confuse accuracy with precision, but actually accuracy is about being close to the true value, while precision is about consistency of readings.
systematic error — An error that results in all readings being either above or below the true value by a fixed amount and in the same direction.
Systematic errors consistently shift measurements away from the true value. They cannot be reduced by repeating readings and averaging, but rather by improving experimental techniques or calibrating instruments. These errors affect accuracy, similar to a weighing scale that always reads 1 kg too high.
Students often think systematic errors can be eliminated by taking many readings and averaging, but actually averaging only reduces random errors; systematic errors require technique or instrument correction.
zero error — A type of systematic error where the scale reading is not zero before measurements are taken.
This occurs when an instrument does not read zero when it should, leading to all subsequent measurements being consistently offset. It must be checked and corrected for before or during an experiment, like a ruler that starts at 1 cm instead of 0 cm.
reaction time — The delay between an experimenter observing an event and starting a timing device.
This is a systematic error in manual timing experiments. To minimize its effect, the duration of the event being timed should be significantly longer than the typical human reaction time, much like the slight delay between seeing a traffic light turn green and pressing the accelerator.
random error — An error that results in readings being scattered around the accepted value.
Random errors cause unpredictable variations in measurements, leading to a spread of readings. They can be reduced by repeating measurements and averaging the results, or by plotting graphs and drawing best-fit lines. These errors affect precision, like darts landing randomly around the bullseye.
parallax error — An error in reading a scale from different angles, causing the apparent position of the indicator to shift.
This error occurs when the observer's eye is not perpendicular to the scale, leading to an incorrect reading. It can be a random error if viewing angle varies, or systematic if always viewed from the same non-normal angle, similar to looking at a speedometer from the passenger seat.
Identify common sources of systematic error (e.g., zero error, wrongly calibrated scale, reaction time) and suggest specific methods to reduce them in experimental design questions. Suggest repeating readings and calculating an average, or plotting a graph and drawing a best-fit line, as methods to reduce random errors and improve precision.
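The contrast between the two error types can be shown with a simulation: averaging many readings shrinks random scatter but leaves a systematic offset untouched. The true value, offset, and spread below are all invented for illustration.

```python
import random
import statistics

# Simulated repeated readings of a quantity whose true value is 10.0,
# taken with an instrument that has a fixed +0.5 zero error (systematic)
# plus Gaussian scatter (random).

random.seed(1)                 # reproducible "measurements"
true_value = 10.0
systematic_offset = 0.5        # e.g. an uncorrected zero error
random_spread = 0.2            # standard deviation of the random error

readings = [true_value + systematic_offset + random.gauss(0, random_spread)
            for _ in range(100)]

mean = statistics.mean(readings)
# The mean settles near 10.5, not 10.0: averaging has suppressed the
# random error, but the systematic offset remains.
print(round(mean, 2))
```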
micrometer screw gauge — A precision measuring instrument used to measure small lengths, typically to the nearest one-hundredth of a millimetre.
It consists of a U-shaped frame, an anvil, a spindle, a thimble, and a ratchet. The object to be measured is placed between the anvil and spindle, and the thimble is rotated until the ratchet slips, ensuring consistent pressure, much like a very fine-tuned caliper.
scalar quantity — A quantity which can be described fully by giving its magnitude and unit.
Scalar quantities have only size (magnitude) and a unit, and can be added algebraically using normal arithmetic rules. Examples include mass, speed, energy, and time, much like telling someone you have '5 dollars'.
vector quantity — A quantity which has magnitude, unit, and direction.
Vector quantities require both a size (magnitude) and a specific direction for their complete description. They cannot be added algebraically but require vector addition methods. Examples include velocity, acceleration, and force, similar to giving directions to 'walk 5 blocks north'.
Students often try to add vectors algebraically like scalars, but actually their directions must be considered using methods like vector triangles or resolution.
resultant — The combined effect of two or more vectors.
The resultant vector represents the single vector that would produce the same effect as all the individual vectors acting together. It is found by vector addition. If two people push a box, the resultant force is the single push that would move the box in the same way as both people pushing together.
vector triangle — A graphical method for adding two vectors by representing them as two sides of a triangle, with the third side representing the resultant.
In a vector triangle, the vectors are drawn head-to-tail, and the resultant is drawn from the tail of the first vector to the head of the second. This method accounts for both magnitude and direction, like a treasure map where each step is an arrow, and the final arrow from start to finish is the resultant.
resolution of vectors — The process of splitting a single vector into two or more component vectors.
A vector can be resolved into components, typically two perpendicular components, whose combined effect is equivalent to the original vector. This simplifies problem-solving, especially when dealing with forces or velocities at angles, much like breaking down a complex task into smaller, simpler sub-tasks.
components — The two or more vectors into which a single vector may be split.
These component vectors, when added together, produce the original vector. Resolving a vector into perpendicular components (e.g., horizontal and vertical) is a common and powerful technique in physics, similar to the x and y coordinates that define a point on a graph.
Students often forget that the components must be perpendicular for the simple trigonometric relationships (sin/cos) to apply directly.
Horizontal component of vector
Fx = F cos θ. Calculates the horizontal component of a vector given its magnitude (F) and angle (θ) relative to the horizontal. Applies to any vector quantity, not just force.
Vertical component of vector
Fy = F sin θ. Calculates the vertical component of a vector given its magnitude (F) and angle (θ) relative to the horizontal. Applies to any vector quantity, not just force.
Pythagoras' theorem
a² + b² = c². Relates the lengths of the sides of a right-angled triangle (with hypotenuse c), allowing calculation of an unknown side if two are known.
Sine rule
a/sin A = b/sin B = c/sin C. Relates the sides of any triangle to the sines of their opposite angles, useful for solving non-right-angled triangles.
Cosine rule
c² = a² + b² − 2ab cos C. Relates the sides of any triangle to one of its angles, useful for solving non-right-angled triangles when two sides and the included angle, or all three sides, are known.
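Resolution and recombination can be checked numerically: resolving a vector into perpendicular components and applying Pythagoras should recover the original magnitude. The `resolve` helper and the 10 N at 30° input are illustrative.

```python
import math

# Resolve a vector into perpendicular components (angle measured from the
# horizontal), then recombine the components with Pythagoras.

def resolve(magnitude, angle_deg):
    """Return (horizontal, vertical) components of a vector."""
    theta = math.radians(angle_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

fx, fy = resolve(10.0, 30.0)      # e.g. a 10 N force at 30° above horizontal
magnitude = math.hypot(fx, fy)    # Pythagoras recovers the original 10 N
print(round(fx, 2), round(fy, 2), round(magnitude, 2))
```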
Physics, like all sciences, relies on the scientific method, an iterative process of observing, measuring, collecting data, analysing patterns, developing theories, testing them, and modifying theories based on results. A theory, in this context, is a pattern discovered from analysed data that can be used to explain other events, representing a well-substantiated explanation, not merely a guess.
Students often think the scientific method is a linear process, but actually it is cyclical, with theories often being modified and re-tested. Also, students often think a theory is just a hypothesis or a guess, but actually in science, a theory is a thoroughly tested and widely accepted explanation.
When asked to describe an experimental procedure, ensure it reflects the iterative nature of scientific inquiry, including steps for data collection, analysis, and potential refinement. Distinguish clearly between a hypothesis (an educated guess) and a theory (a well-supported explanation) in your responses.
Always include the correct unit with any numerical answer for a physical quantity; omitting units is a common error that loses marks. When calculating a resultant, always consider both magnitude and direction, and use appropriate vector diagrams or trigonometric methods. When resolving a vector, always ensure the components are perpendicular to each other, and correctly use sine and cosine based on the angle given.
Definitions Bank
physical quantity
A feature of something which can be measured, consisting of a numerical value and a unit.
scientific method
A process of observing, measuring, collecting data, analysing patterns, developing theories, testing them, and modifying theories based on results.
theory
A pattern discovered from analysed data that can be used to explain other events.
Système Internationale (SI)
A single, internationally agreed-upon system of units based on the metric system of measurement.
base quantities
The fundamental physical quantities upon which the SI system is founded.
Command Word Guide
| Describe | Provide a detailed account of a process or phenomenon, such as the scientific method, ensuring to highlight its iterative nature. For errors, describe their characteristics and effects on measurements. |
| Explain | Give reasons for a phenomenon or distinction. For example, explain why units are essential for physical quantities, or the difference between accuracy and precision, linking them to types of errors. |
| Suggest | Propose a method or instrument, often with justification. For instance, suggest appropriate instruments for measurement or methods to reduce specific types of errors. |
| Show that | Provide a clear, step-by-step derivation or proof. This often applies to checking the homogeneity of equations using base units or deriving base units for a derived quantity. |
Common Mistakes
Confusing a number alone with a physical quantity.
Always remember that a physical quantity requires both a numerical magnitude and a unit to be meaningful.
Believing the scientific method is linear.
The scientific method is cyclical and iterative, involving continuous observation, testing, and modification of theories.
Confusing base units with derived units, or thinking derived units are arbitrary.
Base units are fundamental and independent, while derived units are systematically constructed from base units based on physical laws.
This chapter introduces kinematics, focusing on describing motion using fundamental quantities like distance, displacement, speed, velocity, and acceleration. It covers graphical representations of motion, the derivation and application of equations for uniformly accelerated motion, including free fall, and extends to two-dimensional projectile motion.
Distance — The length along the actual path travelled from the starting point to the finishing point.
Distance is a scalar quantity, meaning it only has magnitude. It measures the total path covered by an object, regardless of its direction.
Displacement — The change of position of a particle.
Displacement is a vector quantity, representing the length travelled in a straight line in a specified direction from the starting point to the finishing point. It indicates both the magnitude and direction of the change in position.
Students often confuse distance (scalar) with displacement (vector) and speed (scalar) with velocity (vector). Remember that displacement and velocity include direction, while distance and speed do not.
Average speed — The distance moved along the actual path divided by the time taken.
Average speed is a scalar quantity that describes how fast an object is moving over a period, calculated by dividing the total distance by the total time.
Average speed
average speed = total distance travelled ÷ total time taken. This is a scalar quantity, representing the overall rate of distance covered.
Velocity — A vector quantity representing the magnitude of how fast a particle is moving, and the direction in which it is moving.
Velocity is a vector quantity, meaning it has both magnitude (speed) and direction. It describes the rate at which an object's displacement changes.
Average velocity — The displacement divided by the time taken.
Average velocity is a vector quantity, calculated by dividing the total displacement by the total time taken. Its sign indicates the direction of motion.
Average velocity
average velocity = (x₂ − x₁) / (t₂ − t₁). This is a vector quantity; its sign indicates the direction of motion. x₁ and x₂ are the initial and final positions, and t₁ and t₂ are the initial and final times.
Instantaneous velocity — The velocity of a particle at a particular moment in time, defined by making the intervals of time over which average velocity is measured shorter and shorter, equivalent to the gradient of the tangent to the displacement–time curve.
Instantaneous velocity provides the velocity at a specific point in time. On a displacement-time graph, it is found by calculating the gradient of the tangent at that particular moment.
Acceleration — A measure of the rate at which the velocity of the particle is changing.
Acceleration is a vector quantity that describes how quickly an object's velocity changes. This change can be in magnitude (speeding up or slowing down) or direction.
Average acceleration — The change in velocity divided by the time taken.
Average acceleration is a vector quantity, calculated by dividing the total change in velocity by the total time taken. Its sign indicates the direction of the acceleration.
Average acceleration
average acceleration = (v₂ − v₁) / (t₂ − t₁). This is a vector quantity; its sign indicates the direction of acceleration. v₁ and v₂ are the initial and final velocities, and t₁ and t₂ are the initial and final times.
Instantaneous acceleration — The average acceleration measured over extremely small time intervals, equivalent to the gradient of the tangent to the velocity–time curve.
Instantaneous acceleration gives the acceleration at a precise moment. On a velocity-time graph, it is determined by the gradient of the tangent at that specific time.
Deceleration — A negative acceleration, indicating that the final velocity is less than the initial velocity.
Deceleration specifically refers to a negative acceleration, which means the object is slowing down. It's important to note that a negative acceleration can also mean speeding up in the negative direction.
Students often misinterpret the sign of velocity or acceleration as always indicating speeding up or slowing down, rather than direction. Remember that a negative sign indicates direction relative to a chosen positive direction.
Motion can be effectively described using graphs. Displacement-time graphs show an object's position over time, where the gradient represents velocity. Velocity-time graphs illustrate an object's velocity over time, with the gradient indicating acceleration and the area under the graph representing displacement.
For graphical questions, clearly label axes and units, and show working for gradient and area calculations to ensure full marks.
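Gradients and areas can also be read off numerically from tabulated graph data. The data points below describe uniform acceleration and are purely illustrative.

```python
# Sample points from a velocity-time graph (uniform acceleration).
times = [0.0, 1.0, 2.0, 3.0]          # s
velocities = [0.0, 2.0, 4.0, 6.0]     # m/s

# Gradient of the v-t graph -> acceleration (rise over run).
acceleration = (velocities[-1] - velocities[0]) / (times[-1] - times[0])

# Area under the v-t graph (trapezium rule) -> displacement.
displacement = sum((velocities[i] + velocities[i + 1]) / 2
                   * (times[i + 1] - times[i])
                   for i in range(len(times) - 1))

print(acceleration, displacement)  # 2.0 m/s^2 and 9.0 m
```

As a cross-check, s = ut + ½at² = 0 + ½ × 2.0 × 3.0² = 9.0 m agrees with the area.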
For objects moving with constant acceleration in a straight line, a set of five kinematic equations, often referred to as SUVAT equations, can be used to solve problems. These equations relate displacement (s), initial velocity (u), final velocity (v), acceleration (a), and time (t).
Kinematic equation 1
v = u + at. This equation relates final velocity, initial velocity, acceleration, and time for uniformly accelerated motion in a straight line.
Kinematic equation 2
s = ut + ½at². This equation relates displacement, initial velocity, acceleration, and time for uniformly accelerated motion in a straight line.
Kinematic equation 3
s = vt − ½at². This equation relates displacement, final velocity, acceleration, and time for uniformly accelerated motion in a straight line.
Kinematic equation 4
v² = u² + 2as. This equation relates final velocity, initial velocity, acceleration, and displacement for uniformly accelerated motion in a straight line.
Kinematic equation 5
s = ½(u + v)t. This equation relates displacement, initial velocity, final velocity, and time for uniformly accelerated motion in a straight line, using average velocity.
Students often incorrectly apply kinematic equations to non-uniformly accelerated motion or circular motion. Remember that these equations are only valid for uniform acceleration in a straight line.
When solving SUVAT problems, always list known quantities (s, u, v, a, t) and the unknown quantity you need to find, then choose the appropriate equation that includes these variables.
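The strategy above can be sketched for a worked free-fall example using the standard equations v = u + at and s = ut + ½at². The input values (dropped from rest, 2.0 s of fall) are illustrative.

```python
# Two of the SUVAT equations as functions, applied to an object dropped
# from rest and falling freely for 2.0 s (g = 9.81 m/s^2 downwards).

def suvat_v(u, a, t):
    """v = u + a*t (kinematic equation 1)."""
    return u + a * t

def suvat_s(u, a, t):
    """s = u*t + (1/2)*a*t^2 (kinematic equation 2)."""
    return u * t + 0.5 * a * t * t

# Knowns: u = 0 (from rest), a = g, t = 2.0 s; unknowns: v and s.
u, a, t = 0.0, 9.81, 2.0
print(suvat_v(u, a, t))   # final speed in m/s
print(suvat_s(u, a, t))   # distance fallen in m
```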
Acceleration of free fall — The uniform acceleration experienced by all objects falling freely near the Earth's surface in the absence of air resistance, represented by the symbol g, with a value of 9.81 m s⁻² and directed downwards.
Free fall is a specific case of uniformly accelerated motion where the only force acting on an object is gravity. The acceleration due to gravity, 'g', is approximately 9.81 m s⁻² and is always directed downwards.
Projectile motion describes the movement of a particle in a plane under the action of a constant force, such as gravity. This motion can be analysed by resolving it into two independent perpendicular components: uniform velocity horizontally and uniform acceleration (due to gravity) vertically.
Projectile motion — The motion of a particle moving in a plane under the action of a constant force, such as a ball thrown at an angle to the vertical.
Projectile motion is characterised by a constant horizontal velocity and a constant vertical acceleration (g). Air resistance is typically neglected in introductory problems.
Range — For a particle projected from a point on level ground, the horizontal distance from the point of projection to the point at which the particle reaches the ground again.
The range is a specific measure of horizontal displacement for projectiles launched and landing at the same vertical level. It depends on the initial velocity, launch angle, and acceleration of free fall.
Projectile range
R = u² sin(2θ) / g. This formula calculates the horizontal range for a particle projected from level ground at an angle θ to the horizontal, where u is the initial projection velocity and g is the acceleration of free fall.
Students often assume acceleration is zero at the highest point of projectile motion. Only the vertical component of velocity is zero at the peak; the acceleration due to gravity (g) is still acting downwards.
In projectile motion problems, always resolve the initial velocity into its horizontal and vertical components and treat these components of motion independently. This simplifies calculations significantly.
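The independence of the two components can be verified numerically: the closed-form range formula R = u² sin(2θ)/g and a component-by-component calculation should agree. The launch values (20 m/s at 45°) are illustrative.

```python
import math

G = 9.81  # acceleration of free fall, m/s^2

def projectile_range(u, angle_deg):
    """Range on level ground from the closed-form formula R = u^2 sin(2θ)/g."""
    theta = math.radians(angle_deg)
    return u ** 2 * math.sin(2 * theta) / G

def range_by_components(u, angle_deg):
    """Same result via independent components: time of flight from the
    vertical motion, range from the constant horizontal velocity."""
    theta = math.radians(angle_deg)
    ux, uy = u * math.cos(theta), u * math.sin(theta)
    t_flight = 2 * uy / G          # time to return to launch height
    return ux * t_flight

u, angle = 20.0, 45.0
print(round(projectile_range(u, angle), 2))      # formula
print(round(range_by_components(u, angle), 2))   # components give the same value
```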
Students sometimes neglect air resistance when it is not explicitly stated to be negligible, or assume it is always negligible. Always consider the context of the problem.
Always state the direction when defining or calculating vector quantities like displacement, velocity, and acceleration. This is crucial for clarity and correctness in physics.
Ensure all units are consistent (e.g., convert km/h to m/s, minutes to seconds) before performing any calculations to avoid errors.
Pay close attention to significant figures in your final answers, typically providing 2 or 3 significant figures unless otherwise specified in the question.
Definitions Bank
Distance
The length along the actual path travelled from the starting point to the finishing point.
Displacement
The change of position of a particle. The length travelled in a straight line in a specified direction from the starting point to the finishing point.
Average speed
The distance moved along the actual path divided by the time taken.
Velocity
A vector quantity representing the magnitude of how fast a particle is moving, and the direction in which it is moving.
Average velocity
The displacement divided by the time taken.
Command Word Guide
| Define | Provide a precise, concise, and accurate scientific definition, including units and scalar/vector nature where appropriate, without additional explanation or examples. |
| Describe | Give a detailed account of the characteristics or features of a phenomenon, process, or experiment, such as the steps to determine the acceleration of free fall using a falling object. |
| Explain | Provide reasons or justifications for a statement or observation, showing understanding of underlying principles, for example, explaining why horizontal and vertical components of projectile motion are independent. |
| Calculate | Determine a numerical value using given information and appropriate formulas, showing all working steps and units, and paying attention to significant figures. |
Common Mistakes
Confusing distance (scalar) with displacement (vector) and speed (scalar) with velocity (vector).
Remember that displacement, velocity, and acceleration are vector quantities and require both magnitude and direction, while distance and speed are scalar quantities and only have magnitude.
Assuming acceleration is zero at the highest point of projectile motion.
At the highest point of projectile motion, only the vertical component of velocity is zero. The acceleration due to gravity (g = 9.81 m s⁻²) is still acting downwards throughout the entire flight.
Not converting units to be consistent (e.g., km/h to m/s, minutes to seconds) before calculations.
Always convert all quantities to consistent SI units (metres, seconds, kilograms) before performing any calculations to avoid errors.
This chapter introduces the fundamental principles of dynamics, focusing on how forces cause changes in motion. It explores concepts such as momentum, Newton's laws of motion, and the effects of resistive forces, culminating in the understanding of terminal velocity and the conservation of momentum in collisions.
mass — Mass is a measure of the inertia of an object, that is, its resistance to a change in velocity.
It is an intrinsic property of an object that quantifies its resistance to acceleration when a force is applied. The bigger the mass, the more difficult it is to change its state of rest or velocity. Imagine pushing an empty shopping cart versus a full one; the full cart has more mass and thus more inertia, making it harder to start or stop.
inertia — Inertia is the property of an object to stay in a state of rest or uniform velocity.
This property is directly related to an object's mass; objects with greater mass have greater inertia and thus resist changes to their motion more strongly. It is a key concept in Newton's first law. When a bus suddenly brakes, passengers lurch forward due to their inertia, as their bodies tend to continue moving at the bus's original velocity.
Students often think mass and weight are the same, but actually mass is a measure of inertia and quantity of matter, while weight is the force of gravity acting on that mass.
force — A force is an influence that can change the shape or dimensions of objects, or disturb the state of rest or uniform velocity of an object.
Forces are vector quantities, possessing both magnitude and direction, and their combined effect is known as the resultant force. They are fundamental to understanding motion and interactions between objects. Pushing a swing to make it move faster or in a different direction is an example of exerting a force.
Always remember to consider the direction of forces when solving problems, especially when resolving components or calculating resultant forces.
Students often think a force is always needed to keep an object moving, but actually a force is only needed to change an object's velocity (i.e., to accelerate it).
newton — One newton is defined as the force which will give a 1 kg mass an acceleration of 1 m s−2 in the direction of the force.
It is the SI unit of force, derived from Newton's second law (F=ma). This definition provides a precise way to quantify force based on measurable quantities of mass and acceleration. Holding a small apple (about 100g) in your hand exerts a force of approximately 1 Newton due to gravity.
Newton's Second Law (constant mass)
F = ma. Applies to objects of constant mass. The resultant force and the acceleration are always in the same direction.
Newton's laws of motion describe the relationship between a body and the forces acting upon it, and its motion in response to those forces. The first law introduces inertia, stating that an object will remain at rest or in uniform motion unless acted upon by a resultant force. The second law quantifies this relationship, linking resultant force to the rate of change of momentum, or to mass and acceleration for constant mass. The third law describes the interaction between two objects, stating that for every action, there is an equal and opposite reaction.
When explaining Newton's first law, explicitly mention inertia as the property that causes an object to resist changes in its state of motion.
weight — The force of gravity which acts on an object is called the weight of the object.
Weight is a vector quantity, always directed towards the centre of the Earth, and is calculated as the product of an object's mass and the acceleration of free fall (W = mg). It is measured in newtons. If you stand on a bathroom scale, it measures your weight, which is the force with which Earth's gravity pulls you down.
Weight
W = mg. Weight is a force and is directed towards the centre of the Earth.
normal contact force — A normal contact force is a force exerted by a surface on an object that acts perpendicularly to the plane of contact.
This force arises due to the contact between two objects and prevents them from passing through each other. For an object resting on a horizontal surface, it balances the object's weight. When you stand on the floor, the floor pushes up on your feet with a normal contact force, preventing you from falling through it.
Students often think the normal contact force is always equal to the weight of an object, but actually it only equals the weight if the surface is horizontal and there are no other vertical forces.
momentum — The momentum of a particle is defined as the product of its mass and its velocity.
It is a vector quantity, meaning it has both magnitude and direction, and its SI unit is kg m s⁻¹ (equivalently N s). Momentum is a crucial concept for understanding collisions and interactions between objects, as it is conserved in isolated systems. A small bullet moving very fast can have the same momentum as a large, slow-moving bowling ball, illustrating that both mass and velocity contribute to momentum.
linear momentum — Linear momentum is the product of an object's mass and its velocity.
This term is used to distinguish it from angular momentum, which is not covered in this context. It is a vector quantity, and its conservation is a fundamental principle in physics, particularly in analyzing collisions. A train moving on a straight track has linear momentum, while a spinning top has angular momentum.
Momentum
p = mv. Momentum is a vector quantity.
Students often think momentum is just about speed, but actually it also depends on mass, and it's a vector quantity, so direction is critical.
Newton's Second Law (rate of change of momentum)
F = Δp/Δt: the resultant force equals the rate of change of momentum. This is the more general form of Newton's second law. For constant mass, it simplifies to F = ma.
impulse — If a constant force F acts on an object for a time Δt, the impulse of the force is given by FΔt.
Impulse is a vector quantity, measured in newton seconds (N s), and is equal to the change in momentum of the object. It is particularly useful for analyzing forces that act over short durations, such as in collisions. When a baseball bat hits a ball, the impulse delivered by the bat causes a large change in the ball's momentum over a very short contact time.
Impulse
impulse = FΔt. Impulse is a vector quantity. It is equal to the change in momentum.
Impulse-Momentum Theorem
FΔt = Δp. Relates the impulse of a force to the change in momentum it causes. For forces that vary over time, use the average force.
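The baseball example above can be made concrete. A Python sketch of the impulse-momentum theorem with illustrative numbers (a 0.15 kg ball reversing direction over an assumed 1 ms contact; not worked examples from the text):

```python
def momentum_change(mass_kg, u_ms, v_ms):
    # Δp = m*v - m*u; 1D motion, so signs carry the direction
    return mass_kg * v_ms - mass_kg * u_ms

def average_force(mass_kg, u_ms, v_ms, dt_s):
    # Impulse-momentum theorem rearranged: F_avg = Δp / Δt
    return momentum_change(mass_kg, u_ms, v_ms) / dt_s

dp = momentum_change(0.15, -20.0, 30.0)          # ball reverses: large Δp
f_avg = average_force(0.15, -20.0, 30.0, 0.001)  # short contact -> large force
```

Notice that because the velocity reverses sign, the magnitude of Δp is larger than either m·u or m·v alone, a point students often miss.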
isolated system — An isolated system is one on which no external resultant force acts.
In such a system, the total momentum remains constant, a principle known as the conservation of momentum. This concept is crucial for analyzing collisions and interactions where external influences are negligible. Imagine two billiard balls colliding on a perfectly frictionless table; the table and air are not exerting significant external forces, so the balls form an isolated system.
Principle of Conservation of Momentum
m₁u₁ + m₂u₂ = m₁v₁ + m₂v₂. Applies to an isolated system (no external resultant force). Momentum is a vector quantity, so directions must be considered.
When applying the principle of conservation of momentum, always clearly define your isolated system to ensure no external forces are overlooked.
The principle of conservation of momentum is fundamental to understanding collisions. In an isolated system, the total momentum before a collision is equal to the total momentum after the collision. Collisions can be classified as either elastic or inelastic, depending on whether kinetic energy is conserved. This principle allows for the prediction of velocities after interactions.
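For a perfectly inelastic collision where the bodies stick together, conservation of momentum alone fixes the final velocity. A minimal Python sketch with illustrative trolley values (not from the text):

```python
def coalesced_velocity(m1, u1, m2, u2):
    # Perfectly inelastic 1D collision (bodies stick together):
    # momentum is conserved, so (m1 + m2) * v = m1*u1 + m2*u2
    return (m1 * u1 + m2 * u2) / (m1 + m2)

# 2 kg trolley at +3 m/s couples with a stationary 1 kg trolley
v = coalesced_velocity(2.0, 3.0, 1.0, 0.0)
p_before = 2.0 * 3.0 + 1.0 * 0.0
p_after = (2.0 + 1.0) * v
```

Momentum before and after are equal by construction; kinetic energy, however, is not conserved here (some is transformed into heat, sound and deformation).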
elastic collision — An elastic collision is one in which the total kinetic energy remains constant.
In such collisions, no energy is lost to permanent deformation, heat, or sound. The relative speed of approach is equal to the relative speed of separation, and both momentum and kinetic energy are conserved. The collision of ideal gas molecules with the walls of a container is often modeled as a perfectly elastic collision.
Conservation of Kinetic Energy (Elastic Collision)
½m₁u₁² + ½m₂u₂² = ½m₁v₁² + ½m₂v₂². Applies only to perfectly elastic collisions. Kinetic energy is a scalar quantity.
Relative Speed (Perfectly Elastic Collision)
u₁ − u₂ = v₂ − v₁: the relative speed of approach equals the relative speed of separation. Applies only for a perfectly elastic head-on collision, where u₁ and u₂ are the initial velocities and v₁ and v₂ are the final velocities.
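Combining conservation of momentum with conservation of kinetic energy gives the standard closed-form result for a 1D elastic collision. A Python sketch (the equal-mass example is illustrative):

```python
def elastic_1d(m1, u1, m2, u2):
    # Final velocities of a perfectly elastic head-on collision, derived
    # from conservation of momentum and conservation of kinetic energy.
    v1 = ((m1 - m2) * u1 + 2.0 * m2 * u2) / (m1 + m2)
    v2 = ((m2 - m1) * u2 + 2.0 * m1 * u1) / (m1 + m2)
    return v1, v2

v1, v2 = elastic_1d(1.0, 4.0, 1.0, 0.0)  # equal masses: velocities are exchanged
```

For equal masses the bodies simply swap velocities, and the relative speed of approach equals the relative speed of separation, as the rule above states.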
inelastic collision — An inelastic collision is one in which the total kinetic energy is not the same before and after the event.
While total energy and momentum are still conserved, some kinetic energy is transformed into other forms, such as heat, sound, or energy for permanent deformation. Objects may stick together after an inelastic collision. When a car crashes and crumples, it's an inelastic collision because kinetic energy is converted into deformation, heat, and sound.
Students often think that in an inelastic collision, momentum is not conserved, but actually momentum is always conserved in any collision within an isolated system, regardless of whether it's elastic or inelastic.
For elastic collisions, you can use both conservation of momentum and conservation of kinetic energy equations to solve for unknowns. For inelastic collisions, only apply the principle of conservation of momentum; do not assume kinetic energy is conserved.
frictional force — A frictional force acts along the common surface of contact between two objects and always acts in the opposite direction to the relative motion of the objects.
This force opposes motion or attempted motion between surfaces in contact. It is larger for rough surfaces and can be reduced by making surfaces smoother or applying lubricants. When you push a box across a floor, the force that makes it hard to move is friction, acting against your push.
viscous force — Viscous force (or drag force) describes the frictional force in a fluid (a liquid or a gas).
This force depends on the viscosity of the fluid and increases with the speed of the object moving through it. Air resistance is a common example of a viscous force. Swimming through water is harder than walking through air because water has a higher viscosity, resulting in a greater viscous force.
drag force — Drag force is another term used to describe the frictional force in a fluid (a liquid or a gas).
It opposes the motion of an object through the fluid and increases with the object's speed. Air resistance is a specific type of drag force. The force you feel pushing against you when you stick your hand out of a moving car window is drag force.
air resistance — Air resistance is an example of a viscous force that opposes the motion of an object through the air.
It is zero when an object's velocity is zero and increases with speed. For falling objects, it eventually balances the gravitational force, leading to terminal velocity. A feather falls slower than a stone because it experiences much greater air resistance relative to its weight.
upthrust — Upthrust (or buoyancy force) is an upward force experienced by an object immersed in a fluid due to the pressure of the fluid on it.
This force depends on the density of the fluid and the volume of the object submerged. While negligible in air for most objects, it is significant in denser fluids like water or oil. When you push a beach ball underwater, you feel an upward force pushing it back up; that's upthrust.
buoyancy force — Buoyancy force (or upthrust) is an upward force experienced by an object immersed in a fluid due to the pressure of the fluid on it.
This force is a consequence of the pressure difference between the top and bottom surfaces of the submerged object. It is proportional to the weight of the fluid displaced by the object. A boat floats because the buoyancy force from the water is equal to the boat's weight.
When objects move through fluids, they experience resistive forces such as viscous forces or drag, including air resistance. These forces oppose the motion and increase with speed. For a falling object, as its speed increases, the air resistance also increases until it balances the object's weight. At this point, the resultant force becomes zero, and the object ceases to accelerate, reaching a constant maximum velocity known as terminal velocity.
terminal velocity — Terminal velocity is the maximum (constant) velocity reached by an object moving through a resistive fluid when the resultant force on it becomes zero.
This occurs when the resistive force (viscous force/drag) becomes equal in magnitude and opposite in direction to the object's weight (and upthrust, if present). At this point, the object no longer accelerates. A parachutist reaches terminal velocity when the upward air resistance force equals their downward weight, allowing them to fall at a constant speed.
Clearly explain that at terminal velocity, acceleration is zero, but velocity is constant and non-zero, due to the balance of forces.
When applying Newton's Laws, clearly identify all forces acting on an object and draw free-body diagrams.
For momentum problems, define your system and clearly state the direction of velocities, using consistent sign conventions.
When solving problems involving F = ma, ensure the force F is the resultant force.
Definitions Bank
mass
Mass is a measure of the inertia of an object, that is, its resistance to a change in velocity.
force
A force is an influence that can change the shape or dimensions of objects, or disturb the state of rest or uniform velocity of an object.
inertia
Inertia is the property of an object by which it tends to stay in a state of rest or uniform velocity.
newton
One newton is defined as the force which will give a 1 kg mass an acceleration of 1 m s⁻² in the direction of the force.
momentum
The momentum of a particle is defined as the product of its mass and its velocity.
+14 more definitions
Command Word Guide
| Define | Provide the precise, formal definition of the term, often linking it to other fundamental concepts (e.g., mass to inertia, force to rate of change of momentum). |
| Explain | Go beyond a definition to describe the underlying physics, cause-and-effect relationships, or how a phenomenon occurs (e.g., explaining terminal velocity by detailing the balance of forces). |
| Calculate | Show all working, including formula, substitution with correct units, and the final answer with appropriate significant figures and units. Pay attention to vector directions for forces and momentum. |
| Describe | Give a clear, sequential account of a process or observation, often qualitatively (e.g., describing the motion of an object with air resistance). |
+1 more
Common Mistakes
Confusing mass and weight.
Mass is a measure of inertia (kg), while weight is the force of gravity (N) acting on that mass (W=mg).
Believing a force is always needed to keep an object moving.
A force is only required to change an object's velocity (i.e., to accelerate it), not to maintain constant velocity.
Thinking inertia is a force.
Inertia is a property of matter, specifically its resistance to changes in motion, not a force itself.
+3 more
This chapter explores forces, density, and pressure, focusing on how forces create turning effects and the conditions for equilibrium. It also covers the fundamental concepts of density and pressure, including hydrostatic pressure and the principle of upthrust.
Centre of gravity (C.G.) — The centre of gravity of an object is the point at which the whole weight of the object may be considered to act.
This point represents the average position of all the weight particles that make up an object. For uniform objects, it's often at the geometrical center. When an object is balanced at its C.G., it does not turn. Imagine trying to balance a complex toy on your finger; the specific point where it balances perfectly without tipping is its centre of gravity.
Students often think the centre of gravity is always at the geometric centre, but actually it depends on the distribution of mass within the object.
Moment of a force — The moment of a force is defined as the product of the force and the perpendicular distance of the line of action of the force from the pivot.
This quantity measures the turning effect of a force about a pivot. It depends on both the magnitude of the force and how far it acts from the pivot, specifically the perpendicular distance to the line of action. Opening a door: pushing closer to the hinges (pivot) requires more force to open it than pushing further away, because the perpendicular distance is smaller.
Moment of a force
moment = F × d, where d is the perpendicular distance from the pivot to the line of action of the force. This calculates the turning effect of a single force about a pivot.
Moment of a force (with angle)
moment = F × l cos θ. This form is used when the force acts at a distance l along the object from the pivot and the object is at an angle θ to the horizontal, so that l cos θ gives the perpendicular distance. This calculates the turning effect of a force when the distance from the pivot is measured along the object and an angle is involved.
Students often think the distance for moment is just any distance from the pivot, but actually it must be the perpendicular distance from the pivot to the line of action of the force.
Always clearly identify the pivot and the perpendicular distance in calculations for moments; a common error is using the wrong distance or not resolving the force correctly.
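The effect of the angle on the perpendicular distance can be checked numerically. A Python sketch for a vertical force (such as a weight) acting a distance l along a tilted rod, with illustrative values:

```python
import math

def moment_inclined(force_n, l_m, theta_deg):
    # Vertical force acting a distance l along a rod inclined at theta
    # to the horizontal: the perpendicular distance is l * cos(theta),
    # so moment = F * l * cos(theta).
    return force_n * l_m * math.cos(math.radians(theta_deg))

m_horizontal = moment_inclined(20.0, 0.5, 0.0)   # rod flat: full moment
m_tilted = moment_inclined(20.0, 0.5, 60.0)      # cos 60° = 0.5: moment halved
```

Tilting the rod reduces the perpendicular distance and hence the turning effect, which is why using the raw distance l instead of l cos θ is a common error.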
Couple — A couple consists of two forces, equal in magnitude but opposite in direction whose lines of action do not coincide.
A couple produces a pure rotational effect without causing any linear acceleration. The forces are parallel but separated, creating a turning effect. Turning a steering wheel: you apply equal and opposite forces on opposite sides of the wheel to make it turn without moving the car sideways.
Students often think a couple is just any two forces, but actually they must be equal in magnitude, opposite in direction, and have non-coinciding lines of action.
Torque of a couple — The torque of a couple is the product of one of the forces and the perpendicular distance between the forces.
This is the turning effect produced by a couple. Unlike the moment of a single force, torque specifically refers to the rotational effect of two forces forming a couple. Tightening a nut with a spanner (torque wrench): the torque applied determines how tight the nut becomes, which is a measure of the turning effect.
Torque of a couple
torque = F × d. The 'force' F refers to the magnitude of one of the two equal forces in the couple, and d is the perpendicular distance between their lines of action. This calculates the turning effect produced by a couple.
Students often think 'moment' and 'torque' are interchangeable for all turning effects, but actually 'torque' is specifically used for the turning effect of a couple.
When calculating torque, remember to use the perpendicular distance between the lines of action of the two forces, not the distance from a pivot.
Principle of moments — The principle of moments states that, for an object to be in rotational equilibrium, the sum of the clockwise moments about any point must equal the sum of the anticlockwise moments about that same point.
This principle is fundamental for analyzing objects that are not rotating or are rotating at a constant angular velocity. It ensures that there is no net turning effect on the object. A seesaw perfectly balanced: the turning effect of the child on one side (clockwise moment) is exactly equal to the turning effect of the child on the other side (anticlockwise moment).
A system is in equilibrium when there is no resultant force and no resultant torque. This means the object is either at rest or moving with constant velocity and constant angular velocity. The principle of moments is crucial for analyzing rotational equilibrium.
Students often think the principle of moments only applies about the centre of gravity, but actually it can be applied about any point for an object in equilibrium.
When applying the principle of moments, clearly state the point about which moments are being taken and consistently identify clockwise and anticlockwise moments.
Density — The density of a substance is defined as its mass per unit volume.
Density is a measure of how much mass is contained in a given volume of a substance. It is an intrinsic property of a material and helps distinguish different substances. Imagine a box filled with feathers versus the same box filled with rocks; the box of rocks has a higher density because it contains more mass in the same volume.
Density
ρ = m/V. Used to find the density of a substance given its mass and volume. ρ is density (kg m⁻³), m is mass (kg), V is volume (m³).
Always ensure units are consistent (kg and m³) when calculating density, and remember to convert given dimensions if necessary.
Pressure — Pressure is defined as force per unit area, where the force F acts perpendicularly to the area A.
Pressure describes how concentrated a force is over a given area. A smaller area for the same force results in higher pressure, and vice-versa. A sharp knife cuts better than a blunt one because the sharp edge has a much smaller area, concentrating the force into higher pressure.
Pressure
p = F/A. The force F must act perpendicularly to the area A. p is pressure (Pa), F is the force acting perpendicularly (N), A is area (m²).
Students often think pressure is just force, but actually it's force distributed over an area, and the force must be perpendicular to that area.
When calculating pressure, ensure the force used is the component perpendicular to the area, and that units are in Newtons and square metres for Pascals.
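The knife-edge idea is easy to quantify: the same force over a smaller area gives a higher pressure. A Python sketch with illustrative values:

```python
def pressure(force_n, area_m2):
    # p = F / A, with F perpendicular to the area; result in pascals
    return force_n / area_m2

p_flat_shoe = pressure(600.0, 0.03)     # a person standing flat-footed
p_sharp_edge = pressure(600.0, 0.0001)  # same force on a tiny area
```

The same 600 N produces pressure 300 times larger on the small area, which is why the sharp knife cuts.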
Pressure within a liquid increases with depth due to the weight of the fluid above. This hydrostatic pressure is dependent on the density of the liquid, the acceleration of free fall, and the depth below the surface. Any external pressure, such as atmospheric pressure, also contributes to the total absolute pressure at a given depth.
Pressure in a liquid (due to depth)
p = ρgh. This formula calculates the pressure due to the column of liquid above a certain depth. p is pressure (Pa), ρ is density of the liquid (kg m⁻³), g is acceleration of free fall (m s⁻²), h is depth below the surface of the liquid (m).
Change in hydrostatic pressure
Δp = ρgΔh. This equation calculates the difference in pressure between two points in a fluid due to their difference in depth. Δp is change in pressure (Pa), ρ is density of the fluid (kg m⁻³), g is acceleration of free fall (m s⁻²), Δh is change in depth (m).
Total pressure in a fluid
p = pₐ + ρgh. This formula includes any external pressure (such as atmospheric pressure) acting on the surface of the fluid. p is total absolute pressure (Pa), pₐ is atmospheric pressure (Pa), ρ is density of the fluid (kg m⁻³), g is acceleration of free fall (m s⁻²), h is depth below the surface of the fluid (m).
Upthrust — The upthrust acting on an object immersed in a fluid is equal to the weight of the fluid displaced.
Upthrust, also known as buoyancy force, is an upward force exerted by a fluid that opposes the weight of an immersed object. It arises from the pressure difference between the top and bottom surfaces of the object. When you push a beach ball underwater, you feel an upward force pushing it back up; that's the upthrust from the water.
Archimedes’ principle — The rule that the upthrust acting on an object immersed in a fluid is equal to the weight of the fluid displaced is known as Archimedes’ principle.
This principle explains why objects float or sink. If the upthrust is greater than the object's weight, it floats; if less, it sinks. It is a fundamental concept in fluid mechanics. A boat floats because it displaces a weight of water equal to its own weight, generating an upthrust that balances its weight.
Upthrust (Archimedes’ principle)
F = ρgV. The volume V is the volume of the immersed part of the object, which is equal to the volume of the fluid displaced. F is upthrust (N), ρ is density of the fluid (kg m⁻³), g is acceleration of free fall (m s⁻²), V is volume of fluid displaced (m³).
Students often think upthrust depends on the weight of the object itself, but actually it depends only on the weight of the fluid displaced by the object.
Archimedes' principle (F = ρgV) is key here; remember that ρ is the density of the fluid, and V is the volume of the fluid displaced (which is the volume of the immersed part of the object).
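Archimedes' principle also gives a simple float-or-sink test: an object floats if the upthrust when fully submerged is at least its weight. A Python sketch with illustrative densities (wood ~600 kg m⁻³, steel ~7850 kg m⁻³; values assumed, not from the text):

```python
def upthrust(rho_fluid, volume_displaced_m3, g=9.81):
    # Archimedes' principle: upthrust = weight of fluid displaced
    return rho_fluid * g * volume_displaced_m3

def floats_in(mass_kg, total_volume_m3, rho_fluid=1000.0, g=9.81):
    # Floats if the fully-submerged upthrust is at least the weight,
    # i.e. if the object's average density <= the fluid's density.
    return upthrust(rho_fluid, total_volume_m3, g) >= mass_kg * g

wood_block = floats_in(mass_kg=600.0, total_volume_m3=1.0)    # less dense than water
steel_block = floats_in(mass_kg=7850.0, total_volume_m3=1.0)  # denser than water
```

Note the upthrust depends only on the fluid's density and the displaced volume, never on the object's own weight.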
When solving equilibrium problems, clearly label all forces and distances on a diagram before applying the principle of moments. For moment calculations, always identify the pivot and ensure you are using the perpendicular distance; show this clearly in your working. Always state units for all calculated values (e.g., Nm for moment/torque, Pa for pressure, kg m⁻³ for density).
Definitions Bank
Centre of gravity (C.G.)
The centre of gravity of an object is the point at which the whole weight of the object may be considered to act.
Moment of a force
The moment of a force is defined as the product of the force and the perpendicular distance of the line of action of the force from the pivot.
Couple
A couple consists of two forces, equal in magnitude but opposite in direction whose lines of action do not coincide.
Torque of a couple
The torque of a couple is the product of one of the forces and the perpendicular distance between the forces.
Principle of moments
The principle of moments states that, for an object to be in rotational equilibrium, the sum of the clockwise moments about any point must equal the sum of the anticlockwise moments about that same point.
+4 more definitions
Common Mistakes
Confusing the centre of gravity with the geometric centre.
The centre of gravity depends on the distribution of mass, not just the geometric shape.
Using any distance for moment calculations instead of the perpendicular distance.
The distance for calculating a moment must always be the perpendicular distance from the pivot to the line of action of the force.
Using 'moment' and 'torque' interchangeably for all turning effects.
'Torque' specifically refers to the turning effect of a couple, while 'moment' is for a single force.
+3 more
This chapter introduces the fundamental concepts of work, energy, and power in physics. It defines work as force times displacement in the direction of the force, and energy as the ability to do work, emphasizing the principle of energy conservation. The chapter also covers efficiency, power as the rate of doing work, and derives formulas for gravitational potential energy and kinetic energy.
Work — Work is done when a force moves the point at which it acts (the point of application) in the direction of the force.
Work is a scalar quantity, measured in joules (J). It quantifies the energy transferred by a force acting over a distance. If the force and displacement are not aligned, only the component of the force in the direction of displacement does work. Imagine pushing a heavy box across a room; the effort you put in to move the box is the work done. If you push down on the box, but it only moves horizontally, only the horizontal component of your push does work.
Displacement — The term displacement represents the distance moved in a particular direction.
Displacement is a vector quantity, meaning it has both magnitude and direction. It is crucial for calculating work done, as work depends on the movement in the direction of the applied force. For example, if you walk 5 metres north, your displacement is 5 metres north. If you walk 5 metres north and then 5 metres south, your total displacement is zero, even though you walked a total distance of 10 metres.
Students often think that any force applied means work is done, but actually work is only done if there is displacement in the direction of the force.
Students often think displacement is the same as distance, but actually distance is a scalar (total path length) while displacement is a vector (change in position from start to end).
Work done (general)
W = Fx. Applies when force and displacement are in the same direction. If they are opposite, the work done is negative.
Work done (with angle)
W = Fx cos θ. Use when the force and displacement are not in the same direction; F cos θ is the component of the force in the direction of the displacement.
When calculating work done, always ensure the force and displacement are in the same direction; if not, resolve the force into its component along the displacement, or use W = Fx cos θ.
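The angle dependence of work done can be checked numerically. A Python sketch with illustrative values (50 N over 4 m, angles chosen to show the limiting cases):

```python
import math

def work_done(force_n, displacement_m, theta_deg=0.0):
    # W = F * x * cos(theta), theta being the angle between the force
    # and the displacement; theta = 0 recovers W = F * x.
    return force_n * displacement_m * math.cos(math.radians(theta_deg))

w_aligned = work_done(50.0, 4.0)              # force along the motion
w_at_60 = work_done(50.0, 4.0, 60.0)          # cos 60° = 0.5: half the work
w_perpendicular = work_done(50.0, 4.0, 90.0)  # perpendicular force: no work
```

A force perpendicular to the motion does no work at all, which is why pushing straight down on a horizontally moving box transfers no energy to it.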
Energy — Anything that is able to do work is said to have energy.
Energy is a scalar quantity, measured in joules (J), representing the capacity to do work. It exists in various forms and can be converted from one form to another, but the total energy in the Universe remains constant. Think of energy as money in a bank account; you can convert it from cash to digital, or spend it on different things (doing work), but the total amount of money you have (or the total energy in the system) is conserved unless you add or remove it from the system.
Law of conservation of energy — Energy cannot be created or destroyed. It can only be converted from one form to another.
This fundamental law states that the total energy within a closed system remains constant. While energy can change forms (e.g., chemical to kinetic), the overall quantity of energy does not change. Imagine a set amount of water in a closed bottle; you can pour it into different shaped containers, or freeze it into ice, or boil it into steam, but the total amount of water in the bottle remains the same.
Students often think energy can be 'lost' or 'used up', but actually energy is always conserved; it is merely converted into less useful forms, often thermal energy.
When applying the conservation of energy, account for all energy forms, including thermal energy due to friction or resistance, to ensure the total energy before and after a process is equal.
Energy exists in various forms and can be converted from one form to another, but the total energy within a closed system remains constant, as stated by the Law of Conservation of Energy. For example, when a battery is connected to a lamp, chemical energy in the battery is converted into electrical energy, which then transforms into light energy and internal (thermal) energy in the lamp. When asked to 'map out energy changes', ensure you identify the initial and final forms of energy, as well as any intermediate conversions and 'wasted' forms like thermal energy.
Efficiency — Efficiency gives a measure of how much of the total energy may be considered useful and is not 'lost'.
Efficiency is the ratio of useful energy output to total energy input, expressed as a ratio or percentage. It can never be greater than 100% because energy cannot be created. If you put 100 units of effort into a task and only 80 units of that effort contribute to the desired outcome, your efficiency is 80%. The other 20 units were 'wasted' on things like friction or heat.
Efficiency (ratio)
efficiency = useful energy output ÷ total energy input. Can be expressed as a ratio (0 to 1) or as a percentage (0% to 100%).
When calculating efficiency, clearly distinguish between the 'useful' energy output and the 'total' energy input, and remember to express it as a ratio or percentage as required.
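The 80-units example above maps directly onto the formula. A minimal Python sketch:

```python
def efficiency(useful_output_j, total_input_j):
    # efficiency as a ratio between 0 and 1; multiply by 100 for percent
    return useful_output_j / total_input_j

eff_ratio = efficiency(80.0, 100.0)   # 80 J useful out of 100 J in
eff_percent = eff_ratio * 100.0
```

Since energy cannot be created, the ratio can never exceed 1 (100%); the remaining input is 'wasted', usually as thermal energy.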
Power — Power is the rate of doing work.
Power is a scalar quantity, measured in watts (W), which is equivalent to joules per second (J s⁻¹). It describes how quickly energy is converted or work is done. Two people might lift the same heavy box (doing the same amount of work), but the person who lifts it faster is more powerful.
Students often think 'power' is the same as 'strength' or 'force', but actually power is the rate at which work is done, while strength refers to the magnitude of a force.
Power (work done per unit time)
P = W/t. Power is the rate at which work is done or energy is converted.
Power (force and velocity)
P = Fv. Applies when the force F is acting in the direction of the constant velocity v.
Remember that power is a rate, so always include time in your calculations. When using P = Fv, ensure F is the force in the direction of velocity v.
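The two power formulas describe the same quantity from different data. A Python sketch with illustrative numbers chosen so both routes give the same answer:

```python
def power_from_work(work_j, time_s):
    # P = W / t
    return work_j / time_s

def power_from_force(force_n, velocity_ms):
    # P = F * v, force along the direction of the constant velocity
    return force_n * velocity_ms

p_from_work = power_from_work(3000.0, 10.0)  # 3000 J of work in 10 s
p_from_force = power_from_force(600.0, 0.5)  # 600 N at a steady 0.5 m/s
```

P = Fv follows from P = W/t with W = Fx, since x/t is the velocity; this is the derivation the command word guide refers to.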
Potential energy — Potential energy is the ability of an object to do work as a result of its position or shape.
This stored energy can be converted into other forms, such as kinetic energy, when the object's position or shape changes. Examples include gravitational potential energy and elastic potential energy. Imagine a stretched rubber band or a ball held high above the ground; both have stored energy due to their position or shape, which can be released to do work.
Elastic potential energy — Elastic potential energy is energy stored due to stretching or compressing an object.
This form of potential energy is stored in objects that have been deformed elastically, such as springs or stretched wires. It is released when the object returns to its original shape. A wound-up toy car spring stores elastic potential energy, which is then released to make the car move.
Gravitational potential energy — Gravitational potential energy is energy possessed by a mass due to its position in a gravitational field.
This energy is stored when work is done against gravity to raise a mass. It can be recovered when the mass falls, converting into kinetic energy. A book on a high shelf has gravitational potential energy; if it falls, that energy is converted into kinetic energy.
Change in Gravitational Potential Energy
ΔEp = mgΔh. Applies in a uniform gravitational field, typically near the Earth's surface. Δh is the vertical displacement.
When calculating changes in potential energy, remember that it's the change in height (Δh) that matters, not the absolute height, and ensure mass is in kilograms.
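A quick numerical check of ΔEp = mgΔh, sketched in Python (a 2 kg book raised through 1.5 m; values are illustrative):

```python
def delta_gpe(mass_kg, delta_h_m, g=9.81):
    # Change in gravitational PE in a uniform field: ΔEp = m * g * Δh
    return mass_kg * g * delta_h_m

gain = delta_gpe(2.0, 1.5)    # raising the book stores energy
loss = delta_gpe(2.0, -1.5)   # lowering it releases the same amount
```

Only the signed change in height matters: raising and then lowering the book through the same distance gives zero net change in gravitational potential energy.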
Kinetic energy — Kinetic energy is energy due to motion.
This energy is possessed by any moving object and is directly proportional to its mass and the square of its speed. It is a scalar quantity, measured in joules. A moving car has kinetic energy; the faster it moves, the more kinetic energy it has, and the more work it can do if it hits something and slows down.
Translational kinetic energy — The full name for the term Ek = ½mv² is translational kinetic energy because it is energy due to an object moving in a straight line.
This specific term distinguishes the kinetic energy associated with linear motion from rotational kinetic energy, which is energy due to an object spinning. A car driving straight down a road has translational kinetic energy. Its wheels also have rotational kinetic energy, but the car's overall forward motion is translational.
Rotational kinetic energy — Rotating objects also have kinetic energy and this form of energy is known as rotational kinetic energy.
This is the energy an object possesses due to its rotation around an axis. It depends on the object's moment of inertia and angular velocity. A spinning top has rotational kinetic energy; even if it's not moving across the floor, it still has energy due to its spin.
Kinetic Energy
Ek = ½mv². Applies to translational kinetic energy. Mass must be in kilograms and speed in metres per second.
Students often think kinetic energy is proportional to velocity, but actually it is proportional to the square of the speed (v²), meaning doubling the speed quadruples the kinetic energy.
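The v² dependence is worth verifying numerically. A Python sketch with illustrative values (a 1000 kg car at two speeds):

```python
def kinetic_energy(mass_kg, speed_ms):
    # Ek = 1/2 * m * v**2 (translational kinetic energy, in joules)
    return 0.5 * mass_kg * speed_ms ** 2

ek_slow = kinetic_energy(1000.0, 10.0)   # car at 10 m/s
ek_fast = kinetic_energy(1000.0, 20.0)   # same car at double the speed
```

Doubling the speed quadruples the kinetic energy, which is why stopping distances grow so sharply with speed.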
Always include units (Joules for energy, Watts for power) in your final answers and show your working clearly for derivation questions.
For power calculations, be mindful of the units: Watts (W) for power, Joules (J) for work, and seconds (s) for time.
Definitions Bank
Work
Work is done when a force moves the point at which it acts (the point of application) in the direction of the force.
Displacement
The term displacement represents the distance moved in a particular direction.
Energy
Anything that is able to do work is said to have energy.
Law of conservation of energy
Energy cannot be created or destroyed. It can only be converted from one form to another.
Efficiency
Efficiency gives a measure of how much of the total energy may be considered useful and is not 'lost'.
Command Word Guide
| Calculate | Provide a numerical answer with correct units and appropriate significant figures, showing all working steps clearly. |
| Derive | Show the logical steps and intermediate equations used to reach the final formula, starting from fundamental principles (e.g., deriving P=Fv from P=W/t and W=Fx). |
| Map out energy changes | Identify the initial, intermediate, and final forms of energy, including any 'wasted' energy (e.g., thermal energy), using arrows to show conversions. |
| State | Give a concise answer without explanation or justification. |
Common Mistakes
Thinking work is done by any applied force.
Work is only done if there is displacement in the direction of the force.
Confusing displacement with distance.
Displacement is a vector (change in position from start to end), while distance is a scalar (total path length).
Believing energy can be 'lost' or 'used up'.
Energy is always conserved; it is merely converted into less useful forms, often thermal energy.
This chapter explores how solid objects change shape or size under applied forces, introducing fundamental concepts like stress, strain, and Hooke's law. It distinguishes between elastic and plastic deformation and explains how to calculate the energy stored during deformation.
Deformation — The change of shape or size of a solid object when forces are applied to it.
Deformation occurs when external forces cause a solid material to alter its dimensions. This can involve stretching, known as tensile deformation, or squeezing, referred to as compressive deformation.
Tensile deformation — A deformation in which an object is stretched.
When an object undergoes tensile deformation, its length increases due to a pulling force. This stretching is caused by a tensile force, which is the load applied to extend the object.
Compressive deformation — A deformation in which the object is squeezed (compressed).
Compressive deformation occurs when an object is subjected to forces that push it together, causing it to shorten or decrease in volume. This is the opposite of tensile deformation.
Tensile force — The load or force that stretches a wire.
A tensile force is the external force applied to an object that causes it to stretch. This force is often referred to as the 'load' when discussing experiments involving hanging weights.
Load — The weight attached to the spring, or the tensile force that causes the extension.
The load is the force applied to a material, typically by attaching a weight, which results in its deformation. For a spring, it's the weight causing the extension.
Extension — The increase in length or deformation of the spring, equal to extended length – natural/original length.
Extension is the measurable increase in an object's length from its original, undeformed state. It is calculated by subtracting the original length from the extended length.
Hooke's Law: F = ke
Applies provided the limit of proportionality is not exceeded.
Limit of proportionality — The point on a load-extension or stress-strain graph up to which the extension of the spring is proportional to the load.
The limit of proportionality marks the boundary on a force-extension graph where the linear relationship between force and extension ceases. Beyond this point, Hooke's law is no longer valid.
Spring constant (or force constant) — A constant k in Hooke's law (F = ke), representing the force per unit extension.
The spring constant, 'k', quantifies the stiffness of a spring or elastic object. A higher 'k' value indicates a stiffer object, requiring more force to achieve a given extension.
Spring Constant: k = F/e
Also known as force constant.
Students often confuse the limit of proportionality with the elastic limit. Remember that the elastic limit is usually just beyond the limit of proportionality.
Hooke's law describes the linear relationship between the force applied to an elastic object and its resulting extension. This law, expressed as F = ke, is valid only up to the limit of proportionality. Within this limit, the object exhibits elastic deformation, meaning it returns to its original shape and size once the deforming force is removed.
Always state the conditions under which a formula is valid, e.g., 'Assuming the limit of proportionality is not exceeded...' when applying Hooke's law or related formulas.
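Hooke's law F = ke can be sketched as a short calculation. The spring constant and the limit of proportionality below are invented illustrative values; a real spring's limit must be found experimentally.

```python
# Sketch: Hooke's law F = k * e, valid only up to the limit of proportionality.
# The spring constant (25 N/m) and limit (0.10 m) are assumed example values.

def hooke_force(k, extension_m, limit_m):
    """Return the force (N) for a given extension, refusing to apply F = ke
    beyond the assumed limit of proportionality."""
    if extension_m > limit_m:
        raise ValueError("beyond limit of proportionality: F = ke no longer valid")
    return k * extension_m

# A spring with k = 25 N/m stretched by 0.040 m needs F = 25 * 0.040 = 1.0 N.
force = hooke_force(25.0, 0.040, limit_m=0.10)
```

Raising an error outside the valid region mirrors the exam advice: always state the condition under which the formula applies.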
Tensile stress (σ) — The ratio of tensile force to the area normal to the force (F/A).
Tensile stress is a measure of the internal forces acting within a deformable body, specifically the force distributed over its cross-sectional area. It quantifies how concentrated the applied force is within the material.
Tensile Stress: σ = F/A
The unit is the newton per square metre (N m⁻²), also called the pascal (Pa).
Tensile strain (ε) — The ratio of extension to original length (e/Lo).
Tensile strain is a dimensionless quantity that describes the fractional change in length of a material when subjected to a tensile force. It indicates the relative deformation of the material.
Tensile Strain: ε = e/L₀
A ratio of two lengths, so it has no unit.
Students often forget that strain is a ratio and therefore unitless, which affects the units of Young modulus. Remember that strain has no units.
Young modulus (E) — The constant E defined as the ratio of stress to strain, provided the limit of proportionality is not exceeded.
The Young modulus, E, is a fundamental material property that quantifies its stiffness or resistance to elastic deformation under tensile or compressive stress. It is a constant for a given material, provided the deformation remains within the limit of proportionality.
Young Modulus (definition): E = σ/ε
Applies provided the limit of proportionality is not exceeded.
Young Modulus (derived): E = FL₀/(Ae)
Used to determine the Young modulus of a metal.
To experimentally determine the Young modulus of a metal wire, one must measure its original length (L₀), its diameter (to calculate the cross-sectional area A), and a series of applied forces (F) and their corresponding extensions (e). By plotting stress (F/A) against strain (e/L₀), the gradient of the initial straight-line region of the graph yields the Young modulus, E. It is crucial to ensure that the limit of proportionality is not exceeded during the measurements.
When describing the experiment to find the Young Modulus, you must state the need to measure: original length (L₀), diameter (to find area A), and a series of forces (F) and corresponding extensions (e).
To find the Young Modulus from a graph, plot stress (y-axis) against strain (x-axis). The gradient of the initial straight line is the Young Modulus, E.
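The graph method described above can be sketched numerically: convert each (F, e) reading to (stress, strain) and take the gradient of the straight-line region. The wire dimensions and readings below are invented illustrative data, chosen to lie in the linear region.

```python
# Sketch of the Young modulus graph method: stress = F/A, strain = e/L0,
# E = gradient of stress against strain. All measurements are example values.
import math

L0 = 2.000    # original length in m
d = 0.50e-3   # diameter in m
A = math.pi * (d / 2) ** 2   # cross-sectional area in m^2

forces = [5.0, 10.0, 15.0, 20.0]                    # N
extensions = [0.25e-3, 0.50e-3, 0.75e-3, 1.00e-3]   # m (linear region only)

stresses = [F / A for F in forces]
strains = [e / L0 for e in extensions]

# Gradient of stress against strain (least-squares line through the origin).
E = sum(s * t for s, t in zip(strains, stresses)) / sum(s * s for s in strains)
```

For this data the gradient is about 2.0 × 10¹¹ Pa, a typical order of magnitude for a metal such as steel.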
Elastic deformation — A deformation in which an object returns to its original shape and size when the force on it is removed.
Elastic deformation is a temporary change in shape or size. When the deforming force is removed, the material fully recovers its original dimensions, much like a stretched rubber band snapping back.
Elastic limit — The maximum force that can be applied to a wire/spring such that the wire/spring returns to its original length when the force is removed.
The elastic limit is the critical point beyond which a material will no longer fully return to its original shape after the deforming force is removed. If the force exceeds this limit, some permanent deformation will occur.
Plastic deformation — A deformation in which an object does not return to its original shape and size when the force on it is removed.
Plastic deformation is a permanent change in the shape or size of an object. Once a material undergoes plastic deformation, it retains some of its altered shape even after the applied force is removed.
Materials can exhibit either elastic or plastic behaviour when subjected to forces. Elastic deformation allows a material to fully recover its original shape and size once the force is removed, provided the elastic limit is not exceeded. Beyond this limit, the material undergoes plastic deformation, resulting in a permanent change in its dimensions.
Elastic potential energy (strain energy) — Energy stored in an object due to change of shape or size, which is completely recovered when the force causing deformation is removed.
Elastic potential energy, also known as strain energy, is the energy stored within a material when it is deformed elastically. This stored energy is fully released and recovered when the deforming force is removed, allowing the object to return to its original state.
Elastic Potential Energy (Force and Extension): E_p = 1/2Fx
For a spring deformed within its limit of proportionality; represents the area under a linear force-extension graph.
Elastic Potential Energy (Spring Constant): E_p = 1/2kx²
For a spring deformed within its limit of proportionality.
Elastic potential energy is the energy stored in a material due to its deformation. For deformations within the limit of proportionality, this energy can be calculated using the formulas E_p = 1/2Fx or E_p = 1/2kx^2. Graphically, the elastic potential energy is represented by the area under the force-extension graph. For non-linear regions, this area must be determined by methods such as counting squares.
Students often misinterpret the area under a force-extension graph when the relationship is non-linear. Remember that the area always represents work done, but 1/2Fx only applies for linear regions.
The area under a force-extension graph ALWAYS represents the work done on the material (or energy stored). For non-linear regions, you may need to count squares to find this area.
Distinguish between the spring constant 'k' (a property of a specific object) and the Young Modulus 'E' (a property of the material itself).
Always convert all units to standard SI units (e.g., cm to m, mm to m) when performing calculations to avoid errors.
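The area-under-the-graph idea can be checked with a short sketch: for a linear (Hookean) region, summing strip areas under the force-extension line gives the same answer as E_p = 1/2kx². The spring constant and extensions are assumed example values.

```python
# Sketch: elastic PE as the area under a force-extension graph. For a linear
# region the trapezium-rule area matches E_p = 0.5 * k * x**2 exactly.
k = 200.0                             # assumed spring constant, N/m
xs = [0.00, 0.01, 0.02, 0.03, 0.04]   # extensions in m
fs = [k * x for x in xs]              # forces in N (linear region)

# Trapezium rule for the area under the F-x graph.
area = sum(0.5 * (fs[i] + fs[i + 1]) * (xs[i + 1] - xs[i])
           for i in range(len(xs) - 1))

formula = 0.5 * k * xs[-1] ** 2       # 0.5 * 200 * 0.04^2 = 0.16 J
```

For a non-linear region the same strip-summing idea still gives the work done, which is why counting squares works when 1/2Fx does not.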
Definitions Bank
Deformation
The change of shape or size of a solid object when forces are applied to it.
Tensile deformation
A deformation in which an object is stretched.
Compressive deformation
A deformation in which the object is squeezed (compressed).
Tensile force
The load or force that stretches a wire.
Extension
The increase in length or deformation of the spring, equal to extended length – natural/original length.
Command Word Guide
| Describe | For an experiment to determine the Young Modulus, expect a clear sequence of steps including measurements of original length, diameter, and corresponding forces and extensions, along with how these are used to calculate stress and strain. |
| Explain | When explaining concepts like elastic vs. plastic deformation, clearly state the behaviour of the material (returns to original shape/size or not) after the force is removed. |
| Calculate | Show all working, including formula, substituted values, and correct units. Pay attention to unit conversions (e.g., cm to m, mm to m). |
Common Mistakes
Confusing the limit of proportionality with the elastic limit.
The elastic limit is usually just beyond the limit of proportionality. Hooke's law (F ∝ x) only applies up to the limit of proportionality.
Incorrectly applying Hooke's law or elastic potential energy formulas beyond the limit of proportionality.
Hooke's law (F=kx) and the energy formula (Ep = ½kx²) are only valid for deformations within the limit of proportionality (the linear region of the force-extension graph).
Forgetting to convert units (e.g., cm to m, mm to m) when performing calculations.
Always convert all measurements to standard SI units (metres, kilograms, seconds) before substituting into formulas.
This chapter explores progressive waves, detailing their classification, fundamental properties, and the transfer of energy. It covers phenomena like the Doppler effect and polarisation, alongside the characteristics of the electromagnetic spectrum.
Progressive waves — Waves which transfer energy from place to place without the transfer of matter.
Progressive waves are a mechanism for energy propagation through a medium or space. Crucially, while energy moves, the material itself does not travel with the wave; particles simply oscillate around their equilibrium positions.
Transverse wave — One in which the vibrations of the particles in the wave are at right angles to the direction in which the energy of the wave is travelling.
In a transverse wave, the particles of the medium oscillate perpendicular to the direction the wave's energy is moving. Examples include waves on a rope or electromagnetic waves.
Longitudinal wave — One in which the direction of the vibrations of the particles in the wave is along or parallel to the direction in which the energy of the wave is travelling.
For a longitudinal wave, the particles vibrate back and forth in the same direction as the wave's energy propagation. Sound waves are a common example, involving compressions and rarefactions.
Students often confuse transverse and longitudinal wave particle motion relative to energy transfer direction. Remember that in transverse waves, vibrations are perpendicular, while in longitudinal waves, they are parallel to the energy transfer.
Displacement — Its distance in a specified direction from its rest/equilibrium position.
Displacement describes how far a particle in a wave has moved from its undisturbed, or equilibrium, position. It is a vector quantity, meaning it has both magnitude and direction.
Amplitude — The maximum displacement of a particle in the wave from its rest/equilibrium position.
The amplitude represents the greatest distance a particle moves from its equilibrium position during an oscillation. It is a measure of the wave's intensity or energy.
Wavelength — The smallest distance over which the wave pattern repeats.
Wavelength can also be defined as the minimum distance between particles which are vibrating in phase with each other. It is a key characteristic for describing the spatial extent of a wave cycle.
Cycle or oscillation — The motion of any particle in the wave from the maximum positive displacement (a crest) to a maximum negative displacement (a trough) and back to a maximum positive displacement.
A cycle, or oscillation, represents one complete repetition of the wave's pattern. For a particle, this means moving through all its possible displacements and returning to its starting point and direction of motion.
Period T — The time for a particle in the wave to complete one oscillation or one cycle.
The period is the duration required for a single particle in the wave to undergo one full cycle of vibration. It is inversely related to frequency.
Frequency f — The number of oscillations (cycles) per unit time.
Frequency quantifies how many complete cycles or oscillations occur within a given unit of time. It is measured in Hertz (Hz), where 1 Hz equals one cycle per second.
Period-Frequency Relationship: f = 1/T
Relates the time for one cycle to the number of cycles per unit time.
Phase difference — A term used to compare the displacements and relative motions of particles in a wave.
Phase difference describes the relative positions and motions of two points on a wave, or two different waves. It indicates how 'out of step' they are, often expressed in degrees or radians.
Phase Difference (time): φ = (Δt/T) × 360°
Used to calculate phase difference between two waves of the same frequency based on time difference.
Phase Difference (distance): φ = (Δx/λ) × 360°
Used to calculate phase difference between two waves of the same frequency based on distance difference.
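Phase difference from a path (distance) difference can be sketched as a one-line calculation; the separations and wavelength below are illustrative values:

```python
# Sketch: phase difference from a path difference, phi = (dx / lambda) * 360
# degrees (the time version uses dt / T instead). Values are examples.

def phase_difference_deg(path_difference_m, wavelength_m):
    return (path_difference_m / wavelength_m) * 360.0 % 360.0

# Two points a quarter of a wavelength apart are 90 degrees out of phase.
phi = phase_difference_deg(0.05, 0.20)
```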
Wavefront — A line or surface joining points on a wave that are in phase.
A wavefront is an imaginary line or surface that connects all points on a wave that are vibrating in phase. For instance, the crests of a water wave form wavefronts.
The speed of a progressive wave, its frequency, and its wavelength are fundamentally linked. The wave equation, v = fλ, allows us to calculate any of these quantities if the other two are known. This equation is derivable from the definitions of speed, frequency (f = 1/T), and wavelength.
Wave Equation: v = fλ
Relates the speed of a progressive wave to its frequency and wavelength.
Be prepared to derive the wave equation v = fλ from the definitions of speed, frequency (f = 1/T), and wavelength.
A Cathode-Ray Oscilloscope (CRO) is a valuable tool for determining the frequency and amplitude of waves, particularly sound waves. By adjusting the time-base and Y-sensitivity settings, the CRO displays a trace from which the period and voltage amplitude can be measured directly. The frequency is then calculated as the reciprocal of the period.
For Cathode-Ray Oscilloscope (CRO) questions, show your full method: state the time-base/y-gain setting, show the number of divisions measured from the screen, and then calculate the final value.
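The CRO working above reduces to two steps: period = divisions per cycle × time-base setting, then f = 1/T. The settings and screen reading below are invented example values:

```python
# Sketch of the CRO frequency calculation: read the period from the trace,
# then f = 1 / T. The time-base setting and divisions are assumed examples.
time_base = 0.5e-3         # s per division (assumed setting)
divisions_per_cycle = 4.0  # number of divisions for one cycle, read from screen

period = divisions_per_cycle * time_base   # 2.0e-3 s
frequency = 1.0 / period                   # 500 Hz
```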
Intensity — The power per unit area.
Intensity quantifies the rate at which energy is transferred by a wave per unit area. For a progressive wave, intensity is proportional to the square of its amplitude, meaning a small increase in amplitude leads to a significant increase in energy transfer.
Intensity (Power per Area): I = P/A
Defines intensity as the amount of energy passing through unit area per unit time.
Intensity (Spherical Wave): I = P/(4πr²)
Applies when waves from a point source spread out equally in all directions, assuming no absorption of energy.
Students often assume intensity is directly proportional to amplitude, rather than amplitude squared. Remember that intensity is proportional to (amplitude)² for a progressive wave.
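Both intensity relations above (the spherical spreading of a point source, and I ∝ amplitude²) can be illustrated numerically; the source power and distances are assumed example values:

```python
# Sketch: for a point source, I = P / (4 * pi * r**2), so doubling the distance
# quarters the intensity; and I is proportional to amplitude squared.
import math

def intensity_point_source(power_w, r_m):
    return power_w / (4.0 * math.pi * r_m ** 2)

i1 = intensity_point_source(100.0, 1.0)   # assumed 100 W source at 1 m
i2 = intensity_point_source(100.0, 2.0)   # same source at 2 m
inverse_square = i1 / i2                  # 4.0: double the distance, quarter the intensity

# I proportional to A^2: doubling the amplitude quadruples the intensity.
amplitude_ratio = 2.0
intensity_ratio = amplitude_ratio ** 2
```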
Refraction — The change in direction of a wave due to a change in speed.
Refraction occurs when a wave passes from one medium to another, causing its speed to change. This change in speed typically results in a change in the wave's direction, unless it enters the new medium perpendicularly.
When a wave changes medium, its frequency remains constant while its speed and wavelength change. Do not assume frequency changes.
Doppler effect — The frequency change due to the relative motion between a source of sound or light and an observer.
The Doppler effect describes the apparent shift in frequency of a wave when there is relative motion between its source and an observer. For sound, this is heard as a change in pitch, such as a siren's pitch changing as it passes by.
Doppler Effect (Sound Source Moving): f_o = f_s v/(v ± v_s)
Use '-' when source moves towards observer (frequency increases), '+' when source moves away (frequency decreases). Applies only to sound waves.
Students often incorrectly apply the Doppler effect formula, especially the sign convention for source movement towards or away from the observer. Remember to use '-' for increasing frequency (source approaching) and '+' for decreasing frequency (source receding).
In Doppler effect calculations, first state whether you expect the observed frequency to increase or decrease to justify your choice of the ± sign in the formula.
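The sign convention can be checked with a short sketch; the source frequency, source speed, and the assumed speed of sound (340 m s⁻¹) are illustrative values:

```python
# Sketch of the Doppler formula for a moving sound source:
# f_obs = f_src * v / (v - v_src) approaching, f_src * v / (v + v_src) receding.
# Speed of sound taken as 340 m/s (an assumed value).

def doppler_observed(f_src, v_source, approaching, v_sound=340.0):
    sign = -1.0 if approaching else 1.0    # '-' for approach: frequency rises
    return f_src * v_sound / (v_sound + sign * v_source)

f_toward = doppler_observed(500.0, 20.0, approaching=True)    # above 500 Hz
f_away = doppler_observed(500.0, 20.0, approaching=False)     # below 500 Hz
```

Checking that the approaching frequency comes out higher than the source frequency is exactly the sanity check the tip above recommends.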
All electromagnetic waves are transverse waves that travel with the same speed, c, in free space (approximately 3.00 × 10⁸ m s⁻¹). The electromagnetic spectrum encompasses a wide range of wavelengths, from long radio waves to very short gamma-rays. Visible light, which the human eye can detect, falls within the approximate wavelength range of 400–700 nm.
Polarised wave — A transverse wave in which vibrations occur in only one of the directions at right angles to the direction in which the wave energy is travelling.
Polarisation is a phenomenon exclusively associated with transverse waves. It describes waves where the oscillations are restricted to a single plane perpendicular to the direction of energy transfer. Unpolarised light, for example, has vibrations in all possible perpendicular planes.
Students often believe that longitudinal waves can be polarised, when polarisation is a phenomenon exclusive to transverse waves. Remember that sound waves, being longitudinal, cannot be polarised.
Malus's Law: I = I₀cos²θ
Calculates the intensity of plane-polarised electromagnetic waves after transmission through a polarising filter.
For Malus's Law, remember that I₀ is the intensity of the incident *polarised* light on the analyser, not the initial unpolarised light intensity.
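Malus's law, I = I₀cos²θ, can be sketched directly; the incident intensity and angles are illustrative values:

```python
# Sketch: Malus's law, I = I0 * cos(theta)**2, where I0 is the intensity of
# the polarised light incident on the analyser. Values are examples.
import math

def malus_intensity(i0, theta_deg):
    return i0 * math.cos(math.radians(theta_deg)) ** 2

half = malus_intensity(100.0, 45.0)      # 45 degrees transmits half the intensity
blocked = malus_intensity(100.0, 90.0)   # crossed polarisers transmit ~nothing
```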
When defining terms, be precise. For 'wavelength', state it is the 'minimum distance between two points in phase'. For 'transverse wave', mention both particle vibration and energy transfer directions.
Distinguish between wave graphs: a displacement-time graph yields the period (T) and amplitude, while a displacement-position graph yields the wavelength (λ) and amplitude.
Definitions Bank
Progressive waves
Waves which transfer energy from place to place without the transfer of matter.
Transverse wave
One in which the vibrations of the particles in the wave are at right angles to the direction in which the energy of the wave is travelling.
Longitudinal wave
One in which the direction of the vibrations of the particles in the wave is along or parallel to the direction in which the energy of the wave is travelling.
Displacement
Its distance in a specified direction from its rest/equilibrium position.
Amplitude
The maximum displacement of a particle in the wave from its rest/equilibrium position.
Command Word Guide
| Describe | Provide a clear, sequential account of a phenomenon or process, e.g., 'Describe wave motion as illustrated by vibration in ropes, springs and ripple tanks' would require explaining how particles move and how energy is transferred in each case. |
| Explain | Give reasons for a phenomenon or observation, linking cause and effect. For example, 'Explain why polarisation is a phenomenon associated with transverse waves' requires detailing the nature of transverse vibrations. |
| Calculate | Determine a numerical value using a formula or given data. Show all working steps and units, e.g., 'Calculate the observed frequency' using the Doppler effect formula. |
| Compare | Identify similarities and differences between two or more concepts. For instance, 'Compare transverse and longitudinal waves' requires stating both their commonality (energy transfer) and key differences (particle vibration direction). |
Common Mistakes
Confusing transverse and longitudinal wave particle motion relative to energy transfer direction.
Remember that in transverse waves, particle vibrations are perpendicular to energy transfer, while in longitudinal waves, they are parallel.
Incorrectly applying the Doppler effect formula, especially the sign convention for source movement towards or away from the observer.
Use '-' in the denominator when the source moves towards the observer (frequency increases) and '+' when the source moves away (frequency decreases).
Assuming intensity is directly proportional to amplitude, rather than amplitude squared.
Intensity is proportional to the square of the amplitude (I ∝ A²).
This chapter explores the principle of superposition, which governs how waves interact to produce interference patterns and stationary waves. It also covers diffraction, the spreading of waves, and its application in diffraction gratings for determining wavelengths.
Principle of superposition — When two or more waves meet at a point, the resultant displacement at that point is equal to the sum of the displacements of the individual waves at that point.
This fundamental principle describes how waves combine. When waves overlap, their individual displacements add up at each point in space and time to create a new, resultant wave. This addition can lead to either an increase or decrease in amplitude.
When defining the Principle of Superposition, you must mention 'displacement' and 'sum' to earn the marks.
Interference — where two or more waves meet or overlap to form a resultant wave. The resultant displacement at any point is the sum of displacements of the individual waves.
Interference is a direct consequence of the principle of superposition. When waves from different sources, or different parts of the same source, overlap, they combine to form a new wave pattern. This pattern can show regions of enhanced or diminished amplitude.
Fringes — The maxima and minima disturbances produced by interference.
When interference occurs, the resulting pattern often consists of alternating regions of maximum and minimum disturbance. These distinct bright and dark bands (for light) or loud and quiet regions (for sound) are known as fringes.
Interference pattern — The collection of fringes produced by the superposition of overlapping waves.
An interference pattern is the overall arrangement of these maxima and minima. It is a stable and observable pattern that arises when waves from coherent sources superpose, demonstrating the wave nature of phenomena like light and sound.
Coherent sources — Wave sources which maintain a constant phase difference.
For a stable and observable interference pattern to form, the waves must originate from coherent sources. This means the phase relationship between the waves must remain constant over time, ensuring that constructive and destructive interference occur at fixed locations.
Coherent waves — Two or more waves are coherent if they have a constant phase difference.
Coherent waves are essential for observing interference. If the phase difference between waves fluctuates randomly, the interference pattern will shift rapidly and average out, making it impossible to observe distinct fringes.
Students often assume that two separate light sources will produce an observable interference pattern. Remember that two independent light sources are incoherent and will not produce a stable, observable interference pattern because they do not maintain a constant phase difference.
Monochromatic light source — a source of one colour, and hence one wavelength λ.
A monochromatic light source emits light of a single wavelength. This is crucial for producing clear and distinct interference patterns, as different wavelengths would produce overlapping patterns, making analysis difficult.
Young's double-slit experiment is a classic demonstration of light interference. A monochromatic light source illuminates two narrow, closely spaced slits. The light diffracts through each slit, and the waves from the two slits then superpose, creating an interference pattern of bright and dark fringes on a screen. Similar setups can be used to demonstrate interference with sound and microwaves.
Fringe width — The distance x on the screen between successive bright fringes (or between successive dark fringes).
The fringe width, also known as fringe separation, is a measurable quantity in interference patterns. It represents the distance between the centres of adjacent bright fringes or adjacent dark fringes on the observation screen.
Fringe separation — The distance x on the screen between successive bright fringes (or between successive dark fringes).
This term is synonymous with fringe width. It is a key parameter used to calculate the wavelength of light in Young's double-slit experiment.
Double-slit interference fringe width: λ = ax/D
Applies for all types of waves, provided fringes are detected many wavelengths from sources. Here, D is the distance from the double slit to the screen, a is the distance between the centres of the slits, x is the fringe width, and λ is the wavelength of light.
For the double-slit formula λ = ax/D, be sure you can identify 'a' (slit separation), 'x' (fringe separation), and 'D' (slit-to-screen distance) from a diagram.
Always convert all length units (e.g., slit separation in mm, wavelength in nm) to metres before using interference or diffraction formulas.
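A worked sketch of λ = ax/D, including the unit conversions the tip above warns about; the measurements are invented example values:

```python
# Sketch of the double-slit calculation lambda = a * x / D, with mm and nm
# conversions made explicit. All measurements are assumed examples.
a = 0.50e-3    # slit separation: 0.50 mm converted to m
x = 2.4e-3     # fringe separation: 2.4 mm converted to m
D = 2.0        # slit-to-screen distance in m

wavelength = a * x / D            # in metres
wavelength_nm = wavelength * 1e9  # 600 nm, within the visible range (400-700 nm)
```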
Stationary waves — The wave patterns formed on vibrating strings; more generally, the result of the overlapping, and hence interference, of two waves of equal frequency and amplitude travelling along the same line with the same speed but in opposite directions.
Stationary waves, also known as standing waves, are formed when two identical progressive waves travelling in opposite directions superpose. Unlike progressive waves, they do not transfer energy and appear to oscillate in place, with fixed points of zero and maximum displacement.
Standing waves — Another term for stationary waves.
This is an alternative name for stationary waves, emphasizing their appearance of not propagating through space.
Progressive waves — Waves that transfer energy.
In contrast to stationary waves, progressive waves carry energy from one point to another. Examples include sound waves travelling through air or light waves propagating through space.
Students often confuse the properties of stationary waves with progressive waves. Remember that stationary waves have nodes and antinodes and do not transfer energy, whereas progressive waves do, and their waveform moves.
Stationary waves are formed by the superposition of two progressive waves of the same frequency and amplitude, travelling along the same line but in opposite directions. This typically occurs when a wave reflects off a boundary and interferes with the incident wave. The resulting pattern has fixed points of zero displacement and maximum displacement.
To explain the formation of a stationary wave, state that two waves of the same frequency travelling in opposite directions superpose (or interfere).
Nodes — Points of zero amplitude on a stationary wave.
Nodes are specific points along a stationary wave where the displacement is always zero. These points remain motionless, resulting from continuous destructive interference between the two superposing waves.
Antinode — A point of maximum amplitude on a stationary wave.
Antinodes are points along a stationary wave where the amplitude of oscillation is maximum. At these points, constructive interference consistently occurs, leading to the largest possible displacement.
In diagrams of stationary waves, clearly label the positions of nodes (N) and antinodes (A).
Stationary waves can be demonstrated using stretched strings or air columns. On a string, fixed ends act as nodes, and the string vibrates in specific modes. In air columns, reflections at the ends create stationary sound waves, with open ends typically being antinodes and closed ends being nodes. The particular frequencies at which these stationary waves are obtained are called resonant frequencies.
Fundamental mode of vibration — The simplest way a stretched string can vibrate, with a single loop.
This is the lowest frequency at which a string can vibrate to form a stationary wave. It corresponds to the first harmonic, where the string forms a single loop with nodes at both ends and an antinode in the middle.
First harmonic — Another term for the fundamental mode of vibration.
The first harmonic is the fundamental frequency of vibration for a system, such as a stretched string or an air column. It represents the simplest stationary wave pattern that can be formed.
First overtone — The next resonant frequency after the fundamental, corresponding to the second harmonic.
The first overtone is the next higher resonant frequency after the fundamental. For a stretched string, it corresponds to the second harmonic, where two loops are formed.
Second harmonic — The next resonant frequency after the fundamental, corresponding to the first overtone.
The second harmonic is twice the frequency of the fundamental. On a stretched string, it features two loops with three nodes and two antinodes.
Second overtone — The resonant frequency after the first overtone, corresponding to the third harmonic.
The second overtone is the resonant frequency after the first overtone. For a stretched string, it corresponds to the third harmonic, forming three loops.
Third harmonic — The resonant frequency after the second harmonic, corresponding to the second overtone.
The third harmonic is three times the frequency of the fundamental. On a stretched string, it exhibits three loops with four nodes and three antinodes.
Students often misinterpret the 'harmonic' and 'overtone' terminology for stationary waves, especially in closed pipes where only odd harmonics are present. Remember that the first overtone is the second harmonic, the second overtone is the third harmonic, and so on.
Frequency of nth mode on a stretched string
f_n = nc/2L, where n=1 for the fundamental mode (first harmonic), n=2 for the second harmonic, etc. L is the length of the string, c is the speed of progressive waves on the string, and f_n is the frequency of the nth mode.
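The mode frequencies above can be tabulated with a minimal Python sketch; the string length and wave speed below are illustrative values, not taken from the notes.

```python
# Sketch: frequencies of the first few modes on a stretched string,
# using f_n = n * c / (2L). The numbers are illustrative.

def string_mode_frequency(n, c, L):
    """Frequency of the nth mode (n = 1 is the fundamental) for a string
    of length L carrying progressive waves of speed c."""
    return n * c / (2 * L)

# Example: c = 320 m/s, L = 0.80 m
fundamental = string_mode_frequency(1, 320.0, 0.80)      # 200.0 Hz
second_harmonic = string_mode_frequency(2, 320.0, 0.80)  # 400.0 Hz
print(fundamental, second_harmonic)
```

Note that each harmonic is an integer multiple of the fundamental, which is the defining feature of the string's resonant series.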
End-correction — The distance of the antinode from the end of the tube, slightly outside the open end.
In resonance tubes, the antinode at an open end does not form exactly at the end of the tube but slightly beyond it. This small additional distance is known as the end-correction and must be accounted for in precise measurements.
Resonant frequencies — The particular frequencies at which stationary waves are obtained in a pipe.
These are the specific frequencies at which a system, like an air column, will naturally vibrate with large amplitudes, forming stable stationary wave patterns. At these frequencies, the conditions for constructive interference are met.
Frequency of nth mode in a closed tube
f_n = (2n − 1)c/4L, where n=1 for the fundamental mode (first harmonic), n=2 for the first overtone (third harmonic), etc. Only odd harmonics are possible. L is the length of the tube, c is the speed of sound in air, and f_n is the frequency of the nth mode.
For stationary waves in a closed pipe, remember that only odd harmonics (1st, 3rd, 5th, etc.) are present.
Wavelength from resonance tube (sound)
λ = 2(L_2 − L_1), used to eliminate the end-correction 'e'. L_1 is the length of the air column at first resonance, and L_2 is the length at second resonance.
Speed of sound from resonance tube
c = 2f(L_2 − L_1), derived from the wavelength formula and c = fλ. f is the frequency of the tuning fork/loudspeaker, L_1 is the length of the air column at first resonance, and L_2 is the length at second resonance.
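The resonance-tube working can be sketched in Python. The resonance lengths and fork frequency below are illustrative; the sketch assumes the standard model L_1 = λ/4 − e and L_2 = 3λ/4 − e, so the end-correction cancels in the difference and can also be recovered separately.

```python
# Sketch: wavelength, speed of sound, and end-correction from two
# resonance lengths of a closed tube. Assumes L1 = λ/4 - e and
# L2 = 3λ/4 - e, giving λ = 2(L2 - L1) and e = (L2 - 3*L1)/2.
# The numbers are illustrative.

def resonance_tube(f, L1, L2):
    wavelength = 2 * (L2 - L1)       # end-correction e cancels in the difference
    speed = f * wavelength           # c = f * λ
    end_correction = (L2 - 3 * L1) / 2
    return wavelength, speed, end_correction

# Tuning fork f = 512 Hz, first resonance at 0.160 m, second at 0.492 m
lam, c, e = resonance_tube(512.0, 0.160, 0.492)
print(lam, c, e)  # ≈ 0.664 m, ≈ 340 m/s, ≈ 0.006 m
```

Taking the difference of two resonance lengths is precisely why this method is preferred: the unknown end-correction drops out of the wavelength calculation.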
Diffraction — The spreading of a wave into regions where it would not be seen if it moved only in straight lines after passing through a narrow slit or past an edge.
Diffraction is a wave phenomenon where waves bend around obstacles or spread out after passing through an aperture. This effect is most noticeable when the wavelength of the wave is comparable to the size of the obstacle or aperture.
When a wave passes through a single narrow slit, it diffracts, producing a characteristic pattern of a wide central maximum and narrower, less intense secondary maxima and minima. The condition for dark fringes in single-slit diffraction is given by a sin θ = nλ, where 'a' is the slit width, 'n' is the order of the dark fringe, 'θ' is the angle, and 'λ' is the wavelength.
Condition for dark fringes in single-slit diffraction
a sin θ = nλ, with n=1 for the first dark fringe. The approximation sin θ ≈ θ (in radians) can be used for small angles (<5°). 'a' is the width of the slit, 'n' is the order of the dark fringe, 'θ' is the angle of the dark fringe from the centre, and 'λ' is the wavelength of light.
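A minimal Python sketch of the dark-fringe condition, comparing the exact angle with the small-angle approximation; the wavelength and slit width are illustrative values.

```python
# Sketch: angle of the nth dark fringe in single-slit diffraction,
# from a sin θ = nλ. The numbers are illustrative.
import math

def dark_fringe_angle(n, wavelength, slit_width):
    """Exact angle (in radians) of the nth dark fringe: a sin θ = nλ."""
    return math.asin(n * wavelength / slit_width)

# λ = 600 nm, a = 0.10 mm: first dark fringe
theta1 = dark_fringe_angle(1, 600e-9, 0.10e-3)
small_angle = 1 * 600e-9 / 0.10e-3   # sin θ ≈ θ approximation
print(theta1, small_angle)
```

For this small ratio λ/a the exact and approximate angles agree to several significant figures, which is why sin θ ≈ θ is safe for light through narrow slits but not for sound or water waves.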
Students often incorrectly apply the sin θ ≈ θ approximation for diffraction angles that are not small (e.g., for sound or water waves where wavelength is comparable to aperture width).
Diffraction grating — A plate on which there is a very large number of parallel, identical, very closely spaced slits.
A diffraction grating is an optical component used to separate light into its constituent wavelengths. It consists of many parallel slits, which produce a much sharper and more widely dispersed diffraction pattern than a single or double slit.
Zero-order maximum — The straight-on direction (n=0) in a diffraction grating pattern.
The zero-order maximum is the central bright fringe in a diffraction grating pattern. It corresponds to light that passes straight through the grating without any deviation, meaning the path difference is zero.
First-order diffraction maximum — The maximum observed when n=1 in the diffraction grating equation.
The first-order maximum is the bright fringe observed at the smallest non-zero angle from the central maximum. It corresponds to a path difference of one wavelength between light from adjacent slits.
Spectrometer — A piece of apparatus used to investigate spectra, often using a diffraction grating.
A spectrometer is an instrument designed to measure the properties of light over a specific portion of the electromagnetic spectrum. It commonly uses a diffraction grating to disperse light into its component wavelengths, allowing for analysis of spectral lines.
Diffraction gratings are used to produce spectra and determine wavelengths. When monochromatic light passes through a grating, distinct bright maxima are observed at specific angles. For white light, the grating separates the colours, producing a spectrum for each order. The diffraction grating equation relates the grating spacing, angle of diffraction, order of the maximum, and wavelength.
Diffraction grating equation
d sin θ = nλ, with n=0 for the zero-order maximum (straight-on direction), n=1 for the first-order maximum, etc. 'd' is the spacing between adjacent slits (grating spacing), 'n' is the order of the diffraction maximum, 'θ' is the angle of the diffraction maximum, and 'λ' is the wavelength of monochromatic light.
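The grating equation also determines how many orders are visible, since sin θ cannot exceed 1. A minimal Python sketch, with an illustrative 300 lines/mm grating and 500 nm light:

```python
# Sketch: diffraction-grating angles and the highest visible order,
# from d sin θ = nλ. The grating and wavelength are illustrative.
import math

lines_per_mm = 300
d = 1e-3 / lines_per_mm            # grating spacing in metres
wavelength = 500e-9                # 500 nm

# Highest order: need sin θ = nλ/d <= 1, so n_max = floor(d/λ)
n_max = int(d / wavelength)
angles = [math.degrees(math.asin(n * wavelength / d))
          for n in range(n_max + 1)]
print(n_max, angles)
```

Because d is fixed, longer wavelengths diffract to larger angles in each order, which is how a grating separates white light into spectra.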
In diffraction grating calculations (d sin θ = nλ), remember that 'n' is an integer representing the order of the maximum.
When describing experiments (e.g., with microwaves, strings, or air columns), clearly state the apparatus used and what is measured to determine wavelength.
Definitions Bank
Interference
Where two or more waves meet or overlap to form a resultant wave. The resultant displacement at any point is the sum of the displacements of the individual waves.
Principle of superposition
When two or more waves meet at a point, the resultant displacement at that point is equal to the sum of the displacements of the individual waves at that point.
Fringes
The maxima and minima of disturbance produced by interference.
Interference pattern
The collection of fringes produced by the superposition of overlapping waves.
Coherent sources
Wave sources which maintain a constant phase difference.
Command Word Guide
| Explain | For 'Explain the principle of superposition', state that the resultant displacement is the sum of individual displacements. For 'Explain the formation of a stationary wave', mention two waves of the same frequency and amplitude travelling in opposite directions superposing. |
| Show an understanding of | For experiments demonstrating stationary waves, describe the apparatus (e.g., microwave transmitter/receiver, stretched string with vibrator, resonance tube with tuning fork) and how nodes/antinodes are identified or measured to determine wavelength. |
| Recall and use | Be able to state the formulas (e.g., λ = ax/D, d sin θ = nλ) and correctly substitute values, ensuring consistent units. Identify variables from diagrams or descriptions. |
| Identify | For stationary waves, correctly label nodes and antinodes on a diagram. For interference patterns, identify bright/dark fringes or maxima/minima. |
Common Mistakes
Confusing the properties of stationary waves with progressive waves.
Remember that stationary waves have fixed nodes and antinodes and do not transfer energy, while progressive waves transfer energy and their waveform moves.
Assuming two separate light sources will produce an observable interference pattern.
Interference patterns require coherent sources, meaning they must maintain a constant phase difference. Independent light sources are incoherent.
Incorrectly applying the sin θ ≈ θ approximation for large diffraction angles.
The approximation is only valid for small angles (typically <5°). For larger angles, use the full sin θ value.
This chapter introduces the fundamental concepts of electricity, defining electric current as the flow of quantised charge carriers and relating it to their drift speed. It covers potential difference as energy transferred per unit charge and derives expressions for electrical power. The chapter also defines resistance and resistivity, exploring their dependence on material properties, dimensions, and temperature, and details the current-voltage characteristics of various components.
coulomb — The unit of charge (symbol C).
Charge is a fundamental property of matter, and the coulomb is its standard unit. It represents a specific quantity of electric charge.
quantised — Exists only in discrete amounts, integral multiples of the charge on an electron.
Electric charge is not continuous but comes in fixed, indivisible packets. This means any observable charge is always a whole number multiple of the elementary charge, which is the charge on a single electron.
ampere — The SI base unit of current (symbol A).
The ampere is the standard unit for measuring electric current, representing the rate at which electric charge flows past a point in a circuit.
conventional current — The flow in the circuit from the positive terminal of the battery or power supply to the negative, in the direction of flow of positive charge.
Historically, current was thought to be the flow of positive charge. Even though we now know that electrons (negative charges) are often the charge carriers in metals, conventional current is still defined as flowing from positive to negative.
Charge
Q = It: used to calculate the total charge (Q) that flows through a point in a circuit over a given time (t) when a constant current (I) is present.
Current in a conductor
I = nAvq: relates the macroscopic current (I) in a conductor to the microscopic properties of its charge carriers, including their number density (n), cross-sectional area (A), average drift speed (v), and the charge on each carrier (q).
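Rearranging I = nAvq gives the drift speed, v = I/(nAq). A minimal Python sketch; the number density used is a typical textbook value for copper, and the other figures are illustrative.

```python
# Sketch: drift speed of electrons in a copper wire from I = nAvq,
# rearranged to v = I / (nAq). n ≈ 8.5e28 m^-3 is a typical textbook
# value for copper; the other numbers are illustrative.

ELEMENTARY_CHARGE = 1.6e-19          # q, in coulombs
n = 8.5e28                           # charge carriers per m^3 (copper)
I = 1.0                              # current in A
A = 1.0e-6                           # cross-sectional area in m^2 (1 mm^2)

v = I / (n * A * ELEMENTARY_CHARGE)  # drift speed in m/s
print(v)  # of order 1e-4 m/s — drift speeds are surprisingly small
```

The tiny result is a standard talking point: although the signal propagates almost instantly, the electrons themselves drift at well under a millimetre per second.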
An electric current is fundamentally a flow of charge carriers. In metallic conductors, these charge carriers are typically free electrons. The charge on these carriers is quantised, meaning it exists only in discrete amounts, specifically integral multiples of the charge on an electron. The rate of flow of charge defines the electric current, measured in amperes.
Students often confuse conventional current (positive charge flow) with electron flow (negative charge flow) in metals. Remember that conventional current flows from the positive terminal to the negative terminal, even though electrons move in the opposite direction.
potential difference — The energy transferred per unit charge.
Potential difference, also known as voltage, quantifies the amount of energy converted from electrical to other forms (or vice versa) when a unit of charge moves between two points in a circuit. It is a measure of the 'push' or 'pull' on charges.
volt — The unit of potential difference, a joule coulomb−1 (symbol V).
The volt is the standard unit for potential difference, indicating that one joule of energy is transferred for every coulomb of charge that moves across that potential difference.
Potential difference
V = W/Q: defines potential difference (V) as the energy transferred or work done (W) per unit charge (Q). It is fundamental to understanding energy transformations in electrical circuits.
Energy transferred
W = QV: used to calculate the total energy transferred (W) when a charge (Q) moves through a potential difference (V).
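Q = It and W = QV chain together naturally in calculations. A minimal Python sketch with illustrative numbers, including the minutes-to-seconds conversion that exam answers often miss:

```python
# Sketch: charge and energy transferred, using Q = It and W = QV.
# The numbers are illustrative; note the conversion of minutes to seconds.

current = 0.25            # A
time = 3 * 60             # 3 minutes, converted to seconds
voltage = 12.0            # V

charge = current * time   # Q = It  -> 45 C
energy = charge * voltage # W = QV  -> 540 J
print(charge, energy)
```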
Electrical power is the rate at which electrical energy is transferred or converted. It is directly related to both the potential difference across a component and the current flowing through it. Power can be expressed in several forms, depending on the known quantities in a circuit.
Electrical power (general)
P = VI: the general formula for electrical power (P), defining it as the product of the potential difference (V) across a component and the current (I) flowing through it.
Electrical power (using resistance and current)
P = I²R: an alternative expression for electrical power (P), particularly useful when the current (I) and resistance (R) of a component are known, highlighting the power dissipated as heat.
Electrical power (using resistance and potential difference)
P = V²/R: another alternative expression for electrical power (P), useful when the potential difference (V) across a component and its resistance (R) are known.
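Since V = IR, the three power expressions are equivalent for an ohmic component. A minimal Python sketch with illustrative values, useful as a self-check when deciding which form to use:

```python
# Sketch: the three power expressions P = VI, P = I²R and P = V²/R
# agree for an ohmic component, because V = IR. Numbers are illustrative.

V = 6.0      # potential difference across the resistor, in volts
R = 12.0     # resistance in ohms
I = V / R    # current from R = V/I  -> 0.5 A

p_vi = V * I          # P = VI
p_i2r = I**2 * R      # P = I²R
p_v2r = V**2 / R      # P = V²/R
print(p_vi, p_i2r, p_v2r)  # all 3.0 W
```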
When asked to 'define' a term like potential difference, use the word-for-word definition: 'energy transferred per unit charge'. An equation alone may not score full marks.
resistance — The ratio of the potential difference V across the conductor to the current I in it.
Resistance is a measure of how much a component opposes the flow of electric current. A higher resistance means that for a given potential difference, less current will flow.
ohm — The unit of resistance, a volt ampere−1 (symbol Ω).
The ohm is the standard unit for resistance. One ohm means that a potential difference of one volt will drive a current of one ampere through the component.
resistor — The general term for a device that has resistance.
A resistor is an electrical component designed to introduce a specific amount of resistance into a circuit, used to control current or voltage.
Resistance
R = V/I: defines resistance (R) as the ratio of the potential difference (V) across a conductor to the current (I) flowing through it.
Ohm’s law — For a metallic conductor at constant temperature, the current in the conductor is proportional to the potential difference across it.
Ohm's law describes a specific relationship between current and potential difference for certain materials. It implies that the resistance of an ohmic conductor remains constant as long as its temperature does not change.
The current-voltage (I-V) characteristics of a component illustrate how the current through it varies with the potential difference across it. For a metallic conductor at constant temperature, the I-V graph is a straight line through the origin, demonstrating Ohm's law. However, not all components obey Ohm's law.
Students often assume resistance is constant for all components. Remember that filament lamps and semiconductor diodes do not obey Ohm's law, and their resistance changes with operating conditions.
The resistance of a filament lamp increases as the current increases. This is because a higher current leads to a higher temperature of the filament. The increased temperature causes the metal lattice ions to vibrate with greater amplitude, leading to more frequent collisions with the charge carriers and thus greater opposition to current flow.
For questions asking why a filament lamp's resistance increases, a full answer must link increased current to increased temperature, and then to more frequent collisions between electrons and vibrating lattice ions.
forward bias — The condition where a diode conducts, with the current in the direction of the arrowhead on the symbol; the anode is at a higher potential than the cathode.
A semiconductor diode is a non-ohmic component that allows current to flow easily in one direction (forward bias) once a certain threshold potential difference is reached. In this state, the diode acts like a conductor.
reverse bias — The condition where the potential difference across a diode is reversed, and the diode does not conduct.
When a diode is in reverse bias, the potential difference across it is oriented in the opposite direction to forward bias. In this condition, the diode exhibits very high resistance and effectively blocks the flow of current.
Resistance is a property of a specific component, depending on its material, length, and cross-sectional area. Resistivity, however, is an intrinsic property of the material itself, independent of its dimensions. It quantifies how strongly a material opposes the flow of electric current.
resistivity — A constant for a particular material at a particular temperature, defined by ρ = RA/L.
Resistivity is a fundamental property of a material that indicates its ability to conduct electricity. It is constant for a given material at a specific temperature and is used to calculate the resistance of a component made from that material, given its dimensions.
Resistance and resistivity
R = ρL/A: relates the resistance (R) of a component to its material's resistivity (ρ), its length (L), and its cross-sectional area (A).
Resistivity
ρ = RA/L: defines resistivity (ρ) based on the resistance (R) of a component, its cross-sectional area (A), and its length (L).
Students often forget to convert units in calculations, such as minutes to seconds, mm² to m², or kV to V. Always ensure all quantities are in SI units before performing calculations.
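A minimal Python sketch of R = ρL/A that makes the mm² to m² conversion explicit. The resistivity is a typical textbook value for nichrome; the other numbers are illustrative.

```python
# Sketch: resistance of a wire from R = ρL/A, showing the mm² -> m²
# conversion the notes warn about. ρ ≈ 1.1e-6 Ωm is a typical textbook
# value for nichrome; the other numbers are illustrative.

resistivity = 1.1e-6          # Ωm (nichrome, approximate)
length = 2.0                  # m
area_mm2 = 0.50               # cross-section quoted in mm²
area_m2 = area_mm2 * 1e-6     # 1 mm² = 1e-6 m²

R = resistivity * length / area_m2
print(R)  # ≈ 4.4 Ω
```

Forgetting the factor of 10⁻⁶ in the area gives an answer a million times too small, so it is worth writing the conversion as its own step.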
Students sometimes confuse resistivity (a material property, unit Ωm) with resistance (a component property, unit Ω). Remember that resistivity is intrinsic to the material, while resistance depends on the specific component's shape and size.
light-dependent resistor (LDR) — A component whose resistance decreases as the light intensity increases.
An LDR is a type of resistor whose resistance is sensitive to light. It is commonly used in light-sensing circuits, such as automatic streetlights or light meters.
thermistors — Negative temperature coefficient devices made from semiconductor material, whose resistance decreases significantly with rise in temperature.
Thermistors are temperature-sensitive resistors. Their resistance changes predictably with temperature, making them useful in temperature sensing and control applications, such as thermostats or fire alarms.
When sketching I–V characteristics, always label your axes correctly (I on y-axis, V on x-axis) and ensure the shape and origin point are accurate for each component.
For an I–V graph (with I on the y-axis), the resistance at any point is R = V/I, calculated from the coordinates of that point. For a straight line through the origin this equals the reciprocal of the gradient (1/gradient); for a curved characteristic it is not the gradient of the tangent.
In I = nAvq calculations, 'q' is the elementary charge (1.6 × 10⁻¹⁹ C) and 'n' is the number of charge carriers per unit volume (m⁻³). Pay close attention to these specific values.
Definitions Bank
coulomb
The unit of charge (symbol C).
quantised
Exists only in discrete amounts, integral multiples of the charge on an electron.
ampere
The SI base unit of current (symbol A).
conventional current
The flow in the circuit from the positive terminal of the battery or power supply to the negative, in the direction of flow of positive charge.
potential difference
The energy transferred per unit charge.
Command Word Guide
| Define | Provide the precise, word-for-word definition as given in the syllabus, e.g., for potential difference, 'energy transferred per unit charge'. An equation alone is insufficient. |
| Explain | Provide a reason or mechanism. For example, explaining why a filament lamp's resistance increases requires linking increased current to increased temperature, and then to more frequent collisions between electrons and vibrating lattice ions. |
| Sketch | Draw a graph with correctly labelled axes (I on y-axis, V on x-axis) and an accurate shape, including the origin point, for the specified component (e.g., metallic conductor, diode, filament lamp). |
| Calculate | Show all working steps, including formula used, substitution of values with units, and the final answer with correct units and appropriate significant figures. Remember to convert units to SI where necessary. |
Common Mistakes
Confusing conventional current with electron flow.
Conventional current is defined as the flow of positive charge from positive to negative terminals, even though electrons (negative charge) flow in the opposite direction in metals.
Using potential difference 'through' a device.
Potential difference is always measured 'across' the ends or terminals of a component, not 'through' it.
Assuming resistance is constant for all components.
Resistance is only constant for ohmic conductors at constant temperature. Filament lamps and semiconductor diodes have non-constant resistance and do not obey Ohm's law.
This chapter explores D.C. circuits, focusing on the fundamental concepts of electromotive force and potential difference, including the impact of internal resistance. It then introduces Kirchhoff's laws for circuit analysis and demonstrates their application in combining resistors. Finally, the chapter covers potential divider circuits and potentiometers for precise voltage control and e.m.f. comparison.
electromotive force — The energy transferred from other forms to electrical per unit charge in driving charge around a complete circuit.
Electromotive force (e.m.f.) measures the energy supplied by a power source, such as a battery, to each unit of charge as it moves through the source. This energy is converted from chemical or other forms into electrical energy, enabling the charge to circulate around the entire circuit.
e.m.f. — Short for electromotive force, which measures, in volts, the energy transferred per unit of charge that passes through the power supply.
e.m.f. is simply the abbreviation for electromotive force. It quantifies the energy provided by a power supply per unit charge, expressed in volts. This energy transfer is crucial for driving current through a complete circuit.
potential difference — The energy transferred from electrical to other forms per unit charge as it passes through a component.
Potential difference (p.d.) represents the energy converted from electrical energy into other forms, such as heat or light, per unit charge as it moves through a specific component in a circuit. It is a measure of the 'push' or 'pull' on charges between two points in a circuit, causing them to do work.
Students often confuse e.m.f. with potential difference, especially regarding energy considerations and internal resistance. Remember that e.m.f. is energy supplied by the source, while p.d. is energy dissipated across a component.
Electromotive force
Measures energy transferred per unit charge through a power supply.
Potential difference
Measures energy transferred per unit charge through a component.
While both e.m.f. and potential difference are measured in volts and represent energy per unit charge, they describe different energy transformations. e.m.f. is the energy converted into electrical form by a source, driving charge around a complete circuit. Potential difference, conversely, is the electrical energy converted into other forms (e.g., heat, light) as charge passes through a component. For example, a battery provides e.m.f., while a lamp has a potential difference across it as it converts electrical energy into light and heat.
internal resistance — Resistance between the terminals of a power supply.
All real power supplies possess some internal resistance, which is the resistance inherent within the source itself. This internal resistance causes some of the electrical energy generated by the source to be dissipated as heat within the source, rather than being delivered to the external circuit. Consequently, the terminal potential difference of a power supply will be less than its e.m.f. when a current is flowing.
terminal potential difference — The p.d. between the terminals of a cell or power supply when a current is being delivered.
The terminal potential difference is the actual voltage available across the external terminals of a power supply when it is actively supplying current to a circuit. Due to internal resistance, this value is typically lower than the e.m.f. of the supply. When no current is drawn, the terminal potential difference equals the e.m.f.
Terminal potential difference (with internal resistance)
V = E − Ir: applies when the power supply is delivering a current I through its internal resistance r.
EMF (with internal resistance and load)
E = IR + Ir: derived from conservation of energy, also written as E = I(R + r).
Terminal potential difference (graph equation)
V = −rI + E: the equation of the straight line obtained when plotting terminal p.d. against current.
For calculations involving internal resistance, use the formula E = I(R + r) for the whole circuit and V = IR for the terminal p.d. across the external load R. To find internal resistance 'r' from a V-I graph, remember the equation is V = -rI + E, so the gradient is -r and the y-intercept is the e.m.f.
Don't assume the terminal p.d. of a battery is always equal to its e.m.f.; it only is when no current is drawn.
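The V–I graph method above can be sketched in Python with a simple least-squares fit: the gradient gives −r and the intercept gives the e.m.f. The data below are synthetic, generated from illustrative values E = 1.5 V and r = 0.5 Ω.

```python
# Sketch: recovering e.m.f. and internal resistance from terminal-p.d.
# vs current readings, using V = -rI + E (gradient -r, intercept E).
# The data are synthetic, generated from E = 1.5 V and r = 0.5 Ω.

def fit_line(xs, ys):
    """Least-squares gradient and intercept for y = m*x + c."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    c = mean_y - m * mean_x
    return m, c

currents = [0.1, 0.2, 0.3, 0.4]                   # A
terminal_pds = [1.5 - 0.5 * i for i in currents]  # V = E - rI

gradient, intercept = fit_line(currents, terminal_pds)
emf = intercept          # y-intercept is the e.m.f.
r_internal = -gradient   # gradient is -r
print(emf, r_internal)
```

With real (noisy) readings the fit averages out random error, which is exactly why the graphical method is preferred over a single pair of readings.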
Kirchhoff's laws are fundamental principles for analysing complex D.C. circuits, based on the conservation of charge and energy. Kirchhoff's first law addresses current at junctions, while the second law deals with potential changes around closed loops. These laws allow for systematic calculation of currents and potential differences throughout a circuit.
Kirchhoff’s first law — The sum of the currents entering a junction in a circuit is always equal to the sum of the currents leaving the junction.
Kirchhoff's first law, also known as the junction rule, is a direct consequence of the conservation of charge. It states that no charge can accumulate at a junction; therefore, the total amount of charge flowing into a junction must equal the total amount of charge flowing out of it per unit time. This means the sum of currents entering a junction equals the sum of currents leaving it.
Kirchhoff's First Law (junction rule)
ΣI_in = ΣI_out: the sum of the currents entering a junction equals the sum of the currents leaving it.
Incorrectly applying Kirchhoff's first law by not conserving charge at junctions is a common error. Always ensure the sum of currents in equals the sum of currents out.
Kirchhoff’s second law — The sum of the electromotive forces in a closed circuit is equal to the sum of the potential differences.
Kirchhoff's second law, or the loop rule, is based on the conservation of energy. It states that for any closed loop in a circuit, the algebraic sum of the e.m.f.s must equal the algebraic sum of the potential differences across the components. This means that any energy supplied by sources in a loop must be dissipated by components within that same loop.
Kirchhoff's Second Law (loop rule)
ΣE = ΣIR: the sum of the e.m.f.s in a closed loop equals the sum of the p.d.s.
Incorrectly applying Kirchhoff's second law by not considering the direction of e.m.f.s or p.d.s in a loop can lead to errors. Pay attention to the polarity and direction of current flow.
Resistors can be connected in two fundamental configurations: series and parallel. The way resistors are combined affects the total equivalent resistance of the circuit, which in turn influences the total current drawn from the power supply. Understanding these combinations is crucial for simplifying complex circuits.
series circuit — A circuit in which the components are connected one after another, forming one complete loop.
In a series circuit, components are arranged sequentially, forming a single path for the current to flow. The current is the same through every component in a series circuit. The total resistance is the sum of individual resistances, and the total potential difference is divided among the components.
Resistors in series
R = R₁ + R₂ + …: the combined resistance is the sum of the individual resistances.
parallel circuit — A circuit where the current can take alternative routes in different loops.
In a parallel circuit, components are connected across the same two points, providing multiple paths for the current. The potential difference across each parallel component is the same. The total current from the source splits among the parallel branches, and the equivalent resistance is less than the smallest individual resistance.
Resistors in parallel
1/R = 1/R₁ + 1/R₂ + …: the reciprocal of the combined resistance is the sum of the reciprocals of the individual resistances.
Forgetting to take the reciprocal when calculating equivalent resistance for parallel resistors is a common mistake. Always remember to invert the sum of the reciprocals to get the final equivalent resistance.
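The two combination rules can be written as small Python helpers; the final inversion for the parallel case is done inside the function, which is the step students forget. The resistor values are illustrative.

```python
# Sketch: combined resistance for series and parallel networks,
# following R = R₁ + R₂ + ... and 1/R = 1/R₁ + 1/R₂ + ... .
# The final reciprocal is taken inside parallel() — the commonly
# forgotten step. Resistor values are illustrative.

def series(*resistances):
    return sum(resistances)

def parallel(*resistances):
    return 1 / sum(1 / r for r in resistances)

r_series = series(4.0, 6.0)        # 10.0 Ω
r_parallel = parallel(4.0, 6.0)    # ≈ 2.4 Ω — less than the smallest branch
print(r_series, r_parallel)
```

The parallel result being smaller than either branch is a quick sanity check worth applying to every answer.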
A potential divider circuit is used to obtain a desired fraction of a supply voltage. It typically consists of two or more resistors connected in series across a voltage source. The output voltage is taken across one of the resistors, allowing for a variable output voltage if one of the resistors is variable, such as a thermistor or LDR.
Potential divider (voltage ratio)
V₁/V₂ = R₁/R₂: the ratio of the voltages across two series resistors is the same as the ratio of their resistances.
Potential divider (output voltage across S)
V_out = V_in × S/(R + S): the output voltage across the variable resistance S in a potential divider.
Potential divider (output voltage across R)
V_out = V_in × R/(R + S): the output voltage across the fixed resistance R in a potential divider.
In potential divider questions, clearly identify which resistor the output voltage is measured across; the voltage ratio equals the resistance ratio (V₁/V₂ = R₁/R₂).
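A minimal Python sketch of the divider formulas, using illustrative resistor values; as a built-in check, the two outputs must sum to the supply voltage.

```python
# Sketch: potential-divider outputs, V_out = V_in * S/(R + S) across S
# and V_in * R/(R + S) across R. Values are illustrative; the two
# outputs must sum to the supply voltage.

def divider_output(v_in, r_other, r_across):
    """Voltage across r_across when it is in series with r_other."""
    return v_in * r_across / (r_other + r_across)

v_in = 9.0
R, S = 2000.0, 1000.0                      # ohms
v_across_s = divider_output(v_in, R, S)    # 3.0 V
v_across_r = divider_output(v_in, S, R)    # 6.0 V
print(v_across_s, v_across_r)
```

Swapping which resistor the output is taken across simply swaps the two answers, so identifying the output resistor correctly is the whole question.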
potentiometer — A continuously variable potential divider.
A potentiometer is essentially a potential divider with a sliding contact, allowing for continuous variation of the output voltage. It consists of a uniform resistance wire or track, and by moving a slider along it, the resistance of the output section can be smoothly adjusted. This makes it useful for controlling voltage or comparing e.m.f.s.
galvanometer — A sensitive current-measuring analogue meter, often with a centre-zero scale.
A galvanometer is a highly sensitive instrument used to detect and measure small electric currents. Its centre-zero scale allows it to indicate current flow in either direction. In potentiometer circuits, it is used to detect a 'null' or 'balance point' where no current flows.
balance point — The position of the sliding contact on a potentiometer wire where the galvanometer reads zero, indicating no current through the test cell.
The balance point in a potentiometer circuit is achieved when the potential difference across a section of the potentiometer wire exactly matches the e.m.f. of the test cell. At this point, no current flows through the galvanometer connected in series with the test cell, indicating a 'null' reading. This null method is advantageous as it measures the e.m.f. without any current being drawn from the cell, thus avoiding voltage drop due to internal resistance.
Potentiometer output voltage
V_out = V_in × R2/(R1 + R2): the output voltage from a potentiometer, where R1 and R2 are the two parts of the total resistance either side of the sliding contact.
Potentiometer (comparing e.m.f.s)
E₁/E₂ = l₁/l₂: used to compare an unknown e.m.f. with a known e.m.f. using the corresponding balance lengths l₁ and l₂.
Potentiometers are particularly useful for accurately comparing the e.m.f.s of two cells using a null method. By finding the balance point for each cell, where no current flows through the galvanometer, the ratio of their e.m.f.s can be determined from the ratio of the corresponding balance lengths. This method is advantageous because it measures the e.m.f. when the cell is on open circuit, avoiding any voltage drop due to internal resistance.
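The null-method comparison reduces to one ratio, E₁/E₂ = l₁/l₂. A minimal Python sketch; the standard-cell e.m.f. and balance lengths below are illustrative.

```python
# Sketch: comparing two e.m.f.s with a potentiometer null method,
# using E1/E2 = l1/l2 (ratio of balance lengths). Numbers are
# illustrative; 1.018 V is a typical standard-cell value.

E_standard = 1.018     # known e.m.f. of a standard cell, in volts
l_standard = 0.509     # balance length for the standard cell, in metres
l_unknown = 0.750      # balance length for the unknown cell

E_unknown = E_standard * l_unknown / l_standard
print(round(E_unknown, 3))  # ≈ 1.5 V
```

Because no current flows at balance, the result is the open-circuit e.m.f., unaffected by the cell's internal resistance.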
When defining e.m.f. or p.d., always refer to 'energy transferred per unit charge' to secure the marks.
When drawing circuit diagrams, use the correct, standard symbols and ensure your circuit forms a complete, logical loop.
Definitions Bank
electromotive force
The energy transferred from other forms to electrical per unit charge in driving charge around a complete circuit.
e.m.f.
The energy transferred from other forms to electrical per unit charge in driving charge around a complete circuit.
potential difference
The energy transferred from electrical to other forms per unit charge as it passes through a component.
internal resistance
Resistance between the terminals of a power supply.
terminal potential difference
The p.d. between the terminals of a cell or power supply when a current is being delivered.
Command Word Guide
| Define | For 'e.m.f.' or 'potential difference', state 'energy transferred per unit charge' and specify the energy transformation (e.m.f.: other to electrical; p.d.: electrical to other). |
| Distinguish | Clearly state the difference between e.m.f. and p.d. in terms of energy transformation and location (source vs. component), and mention the role of internal resistance. |
| Explain | For Kirchhoff's laws, explain their basis in conservation principles (charge for first law, energy for second law) and how they apply to currents at junctions or voltages in loops. |
| Calculate | Show all steps, including formulas used, substitution of values, and correct units. For parallel resistors, remember to take the reciprocal at the end. |
Common Mistakes
Confusing e.m.f. with potential difference.
e.m.f. is energy supplied by the source per unit charge (chemical to electrical), while p.d. is energy dissipated across a component per unit charge (electrical to other forms).
Assuming e.m.f. is a force.
e.m.f. is energy per unit charge, not a force, and is measured in volts.
Incorrectly applying Kirchhoff's first law.
Always ensure the sum of currents entering a junction exactly equals the sum of currents leaving it, due to conservation of charge.
This chapter explores the nuclear model of the atom, detailing its composition and the processes of radioactive decay. It then delves into the fundamental particles that constitute matter, classifying them and explaining the quark model.
proton number — The number of protons in the nucleus of an atom is called the proton number (or atomic number) Z.
The proton number, denoted by Z, uniquely identifies an element. It represents the positive charge within the nucleus and determines the element's chemical properties.
nucleon number — The number of protons together with the number of neutrons in the nucleus is called the nucleon number (or mass number) A.
The nucleon number, A, represents the total count of particles within the nucleus, which are collectively known as nucleons. It is essentially the mass number of the atom.
nucleon — A nucleon is the name given to either a proton or a neutron in the nucleus.
Protons and neutrons are the constituent particles of an atomic nucleus. They are both affected by the strong nuclear force, which binds them together.
nuclide — A nuclide is the name given to a class of atoms whose nuclei contain a specified number of protons and a specified number of neutrons.
A nuclide specifies a particular type of atom based on its exact nuclear composition. This includes both the number of protons (Z) and neutrons, which together determine its nucleon number (A).
Nuclide representation
ᴬ_Z X (for example ²³⁸₉₂U for uranium-238)
Used to represent a particular atom of an element (a nuclide). Here, A is the nucleon number (written as a superscript), Z is the proton number (written as a subscript), and X is the chemical symbol of the element.
isotopes — Isotopes are different forms of the same element which have the same number of protons but different numbers of neutrons in their nuclei.
Isotopes of an element share the same proton number (Z), meaning they are the same element, but they differ in their nucleon number (A) due to varying numbers of neutrons. This difference affects their mass but not their chemical identity.
Students often confuse nucleon number (A) and proton number (Z) or their roles in nuclear equations. Remember that A is the total number of protons and neutrons, while Z is only the number of protons.
When calculating the number of neutrons in a nuclide, always show the subtraction N = A - Z clearly.
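The subtraction is trivial, but worth making explicit; uranium-238 is used below as an illustrative nuclide:

```python
# Neutron count from nucleon number A and proton number Z: N = A - Z.
def neutron_count(A, Z):
    return A - Z

# Uranium-238 has A = 238 and Z = 92:
print(neutron_count(238, 92))   # 146 neutrons
```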
The α-particle scattering experiment provided crucial evidence for the nuclear model of the atom. This experiment demonstrated that atoms consist mostly of empty space, with a very small, dense, positively charged nucleus at their center. Electrons orbit this nucleus, forming a simple model of the nuclear atom.
radioactive — Nuclei are said to be radioactive when they emit particles and/or electromagnetic radiation to become more stable.
Radioactive nuclei are inherently unstable and undergo a process called radioactivity to achieve a more stable configuration. This process involves the spontaneous emission of various forms of radiation.
radioactivity — The emission of particles and/or electromagnetic radiation from unstable nuclei.
Radioactivity is the phenomenon where unstable atomic nuclei spontaneously transform, releasing energy and matter in the form of particles or electromagnetic waves. This process is driven by the nucleus seeking a more stable state.
parent nuclide — The original nuclide in a radioactive decay process.
In any radioactive decay, the initial unstable nucleus is referred to as the parent nuclide. It undergoes transformation to produce a new nucleus.
daughter nuclide — The new nuclide formed after a radioactive decay process.
Following radioactive decay, the resulting nucleus is called the daughter nuclide. This daughter nuclide may itself be stable or undergo further decay.
α-particle — An α-particle consists of two protons and two neutrons and hence has a charge of +2e. An α-particle is identical to the nucleus of a helium atom.
Alpha particles are relatively heavy and carry a positive charge. Due to their composition, they are highly ionising, meaning they can dislodge electrons from atoms they interact with, forming ion pairs.
ion pair — An ionised atom and the dislodged electron, produced when an α-particle interacts with nearby atoms.
When an alpha particle passes through matter, its strong positive charge attracts electrons from atoms, pulling them away. This creates a positively charged ion and a free electron, forming an ion pair.
Alpha decay equation (general form)
ᴬ_Z X → ᴬ⁻⁴_(Z−2) Y + ⁴₂He
Represents the emission of an alpha particle from a parent nuclide. In this process, the nucleon number (A) decreases by 4 and the proton number (Z) decreases by 2.
β-particles — Fast moving electrons, β−, or positrons, β+.
Beta particles are much lighter than alpha particles and carry either a negative or positive charge. They are emitted during beta decay processes, which involve changes within the nucleus.
positron — A positive electron (β+) also known as an antielectron, which is the antiparticle of an electron.
A positron is the antimatter counterpart of an electron. It has the same mass as an electron but carries an opposite, positive charge. Positrons are emitted during beta-plus decay.
antiparticle — An antiparticle has the same mass but opposite charge to the corresponding particle.
For every particle, there exists an antiparticle with identical mass but opposite charge and other quantum numbers. When a particle and its antiparticle meet, they can annihilate each other.
Misidentifying the antiparticle of an electron as having different mass or the same charge is a common error. Remember, an antiparticle has the same mass but opposite charge to its corresponding particle.
antineutrino — The antimatter equivalent of the neutrino, emitted with a β− particle.
An antineutrino is a fundamental particle with no electrical charge and very little or no mass. It is emitted alongside a beta-minus particle during beta-minus decay to conserve energy and lepton number.
neutrino — A particle with no electrical charge and little or no mass, emitted with a β+ particle.
A neutrino is a fundamental particle, similar to an antineutrino but its matter counterpart. It is emitted with a beta-plus particle during beta-plus decay, also ensuring conservation laws are upheld.
Beta-minus decay equation (general form)
ᴬ_Z X → ᴬ_(Z+1) Y + ⁰₋₁e + ν̄
Represents the emission of a beta-minus particle (electron) and an antineutrino from a parent nuclide. The nucleon number (A) remains unchanged, while the proton number (Z) increases by 1.
Beta-plus decay equation (general form)
ᴬ_Z X → ᴬ_(Z−1) Y + ⁰₊₁e + ν
Represents the emission of a beta-plus particle (positron) and a neutrino from a parent nuclide. The nucleon number (A) remains unchanged, while the proton number (Z) decreases by 1.
Incorrectly applying conservation laws (nucleon number, charge) in nuclear reactions is a common mistake. Always ensure both the nucleon number (A) and proton number (Z) are balanced on both sides of the equation.
When writing nuclear equations, always show the full balanced equation, including the correct symbols for all particles, such as ⁴₂He for an alpha particle and ⁰₋₁e for a beta-minus particle.
Not understanding that β-particles have a continuous range of energies due to neutrino/antineutrino emission, unlike discrete α-particle energies, is a key misconception. The neutrino/antineutrino carries away varying amounts of energy.
γ-radiation — Part of the electromagnetic spectrum with wavelengths between 10⁻¹¹ m and 10⁻¹³ m, emitted by excited nuclei.
Gamma radiation consists of high-energy photons, meaning it is electromagnetic radiation rather than particles. It is emitted when an excited nucleus transitions to a lower energy state, similar to how electrons emit photons when changing energy levels.
Gamma emission equation (general form)
ᴬ_Z X* → ᴬ_Z X + γ
Represents the emission of a gamma photon from an excited nucleus (X*) to return to its ground state (X). Neither the nucleon number (A) nor the proton number (Z) changes during gamma emission.
radioactive decay series — A sequence of radioactive decay from parent nuclide through succeeding daughter nuclides, ending when a stable nuclide is reached.
Many heavy, unstable nuclei undergo a series of alpha and beta decays until a stable, non-radioactive nuclide is formed. This chain of transformations is known as a radioactive decay series.
In all nuclear processes, fundamental conservation laws must be upheld. Specifically, both the nucleon number (A) and the charge (derived from the proton number Z) are conserved. This means the sum of A and Z values on the reactant side must equal the sum on the product side of a nuclear equation.
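A balance check of the kind described can be sketched as a short routine (a teaching aid for checking equations, not part of any syllabus):

```python
# Check that nucleon number (A) and proton number (Z) balance
# across a nuclear equation. Each particle is a tuple (A, Z).
def is_balanced(reactants, products):
    total = lambda side: (sum(A for A, _ in side), sum(Z for _, Z in side))
    return total(reactants) == total(products)

# Alpha decay of uranium-238: U-238 -> Th-234 + alpha particle
U238, Th234, alpha = (238, 92), (234, 90), (4, 2)
print(is_balanced([U238], [Th234, alpha]))   # True
```

The same check works for beta decays if the beta particle is written as (0, −1) for an electron or (0, +1) for a positron.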
fundamental particle — A particle that is not formed from other particles.
Fundamental particles are considered the most basic building blocks of matter and energy. They are not composed of any smaller, more elementary constituents.
strong nuclear force — The force that holds the nucleons in the nucleus together, acting on protons and neutrons but not on electrons, and is very short range.
The strong nuclear force is one of the four fundamental forces of nature. It is responsible for binding protons and neutrons within the atomic nucleus, overcoming the electrostatic repulsion between positively charged protons. Its extremely short range means it only acts over nuclear distances.
hadrons — Subatomic particles affected by the strong force, for example protons and neutrons.
Hadrons are composite particles that interact via the strong nuclear force. Protons and neutrons are examples of hadrons, meaning they are not fundamental particles themselves but are made up of smaller constituents.
leptons — Subatomic particles not affected by the strong force, for example electrons and positrons.
Leptons are a class of fundamental particles that do not experience the strong nuclear force. Electrons, positrons, neutrinos, and antineutrinos are all examples of leptons.
Confusing hadrons and leptons, or their interaction with the strong force, is a common error. Remember, hadrons (like protons and neutrons) are affected by the strong force, while leptons (like electrons and neutrinos) are not.
quarks — Fundamental particles that make up hadrons.
Quarks are fundamental particles that combine to form hadrons. There are six 'flavours' or types of quarks: up, down, strange, charm, top, and bottom. They carry fractional electric charges.
baryon — A type of hadron made up of three quarks or three antiquarks.
Baryons are a specific category of hadrons composed of three quarks (or three antiquarks). Protons and neutrons are the most common examples of baryons.
meson — A type of hadron made up of a quark and an antiquark.
Mesons are another type of hadron, but unlike baryons, they are composed of a quark and an antiquark pair. They are generally unstable and have shorter lifetimes.
The quark model explains the composition of hadrons. Protons are composed of two up quarks and one down quark (uud), while neutrons are composed of one up quark and two down quarks (udd). Quarks are fundamental particles, meaning they are not made of smaller constituents.
Beta decay can be understood at the fundamental level through quark transformations. In beta-minus decay, a down quark (charge -1/3e) within a neutron changes into an up quark (charge +2/3e), converting the neutron into a proton. This process also emits an electron (beta-minus particle) and an antineutrino. Conversely, in beta-plus decay, an up quark changes into a down quark, converting a proton into a neutron, and emitting a positron and a neutrino.
Neutron decay (free neutron)
¹₀n → ¹₁p + ⁰₋₁e + ν̄
Describes the transformation of a free neutron into a proton, electron (beta-minus particle), and antineutrino. This is a fundamental process underlying beta-minus decay.
Proton decay (in nucleus)
¹₁p → ¹₀n + ⁰₊₁e + ν
Describes the transformation of a proton in the nucleus into a neutron, positron (beta-plus particle), and neutrino. This process is responsible for beta-plus decay.
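Charge conservation in these quark transformations can be verified with exact fractions, using the quark charges quoted above (a sketch for checking the arithmetic):

```python
from fractions import Fraction

# Quark charges in units of e
charge = {'u': Fraction(2, 3), 'd': Fraction(-1, 3)}

def hadron_charge(quarks):
    return sum(charge[q] for q in quarks)

neutron = hadron_charge('udd')   # 2/3 - 1/3 - 1/3 = 0
proton = hadron_charge('uud')    # 2/3 + 2/3 - 1/3 = +1
# Beta-minus decay: n -> p + e- + antineutrino (antineutrino charge 0)
print(neutron == proton + Fraction(-1))   # True: charge is conserved
```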
Incorrectly stating the quark composition of protons and neutrons, or the quark changes during beta decay, is a common error. Remember a proton is 'uud' and a neutron is 'udd'. Beta decay involves a quark changing flavour (u → d or d → u).
For questions on beta decay at a fundamental level, state the specific quark change (e.g., 'a down quark changes to an up quark') and show how charge is conserved.
For 'describe' questions on α-particle scattering, state the setup, observations (most pass through, some deflect, few rebound), and conclusions (small, dense, positive nucleus).
To 'distinguish' between isotopes, explicitly state what is the same (proton number) and what is different (neutron number).
Definitions Bank
proton number
The number of protons in the nucleus of an atom is called the proton number (or atomic number) Z.
nucleon number
The number of protons together with the number of neutrons in the nucleus is called the nucleon number (or mass number) A.
nucleon
A nucleon is the name given to either a proton or a neutron in the nucleus.
nuclide
A nuclide is the name given to a class of atoms whose nuclei contain a specified number of protons and a specified number of neutrons.
isotopes
Isotopes are different forms of the same element which have the same number of protons but different numbers of neutrons in their nuclei.
Command Word Guide
| Infer | For the α-particle scattering experiment, inferring requires stating the observations and then directly linking them to the conclusions about the nucleus (existence, small size, positive charge). |
| Describe | When describing the nuclear atom or radioactive decay, provide details on composition (protons, neutrons, electrons), properties (mass, charge of radiations), and the process itself (e.g., changes in A and Z). |
| Distinguish | To distinguish between terms like nucleon number and proton number, or isotopes, clearly state what is the same and what is different for each term. |
| Understand and use | For nuclide notation and conservation laws, demonstrate understanding by correctly applying them in nuclear equations, balancing A and Z, and identifying particles. |
Common Mistakes
Confusing nucleon number (A) and proton number (Z) or their roles in nuclear equations.
Remember that A is the total number of protons and neutrons (mass number), while Z is only the number of protons (atomic number).
Incorrectly applying conservation laws (nucleon number, charge) in nuclear reactions.
Always ensure that the sum of nucleon numbers (A) and proton numbers (Z) are balanced on both sides of any nuclear equation.
Not understanding that β-particles have a continuous range of energies due to neutrino/antineutrino emission, unlike discrete α-particle energies.
The neutrino or antineutrino emitted during beta decay carries away a variable amount of energy, leading to a continuous energy spectrum for the beta particle.
This chapter introduces the fundamental concepts of circular motion, including radian measure, angular speed, centripetal acceleration, and centripetal force. It explains how these quantities are related and how various physical forces provide the necessary centripetal force in different real-world scenarios.
radian (rad) — The angle subtended at the centre of a circle by an arc equal in length to the radius of the circle.
The radian is a unit of angular measure. When the length of an arc along the circumference of a circle is exactly equal to the radius of that circle, the angle formed at the centre by this arc is defined as one radian. This provides a natural and convenient way to express angular displacement.
angular speed — The angle swept out by the radius of the circle per unit time.
Angular speed quantifies how quickly an object rotates or revolves around a central point. It measures the rate at which the radius of the circular path sweeps through an angle. For example, a faster spinning object will have a higher angular speed.
angular velocity — The angular speed in a given direction (for example, clockwise).
Angular velocity is a vector quantity that includes both the angular speed and the direction of rotation. While angular speed tells us how fast an object is rotating, angular velocity specifies this rate along with whether it's rotating clockwise or anti-clockwise.
centripetal acceleration — The acceleration towards the centre of the circle for an object travelling in a circle.
Even if an object moves at a constant speed in a circle, its velocity is continuously changing because its direction is always changing. This change in velocity means there must be an acceleration, which is always directed towards the centre of the circle. This inward acceleration is known as centripetal acceleration.
centripetal force — The resultant force acting towards the centre of the circle, required to make an object accelerate in circular motion.
According to Newton's second law, an acceleration requires a resultant force. For an object to undergo centripetal acceleration and thus move in a circle, there must be a resultant force acting towards the centre of the circle. This force is not a new type of force but is provided by existing physical forces like tension, gravity, friction, or the normal force.
Angle in radians
θ = s / r
Used to define and calculate angles in radians, where s is the arc length and r is the radius of the circle.
Relationship between linear and angular speed
v = rω
Applies to an object moving at constant speed in a circle.
Angular speed for one revolution
ω = 2π / T
Relates angular speed to the period T of circular motion.
Angular displacement is the angle swept out by a radius of a circle. It is commonly measured in radians, where one radian is defined as the angle subtended at the centre of a circle by an arc equal in length to the radius. This unit simplifies calculations involving circular motion, as a full circle corresponds to 2π radians.
Students often confuse linear velocity with linear speed. Remember that velocity changes even if speed is constant in circular motion because the direction of motion is continuously altering.
For an object moving in a circle, its linear speed (v) along the circumference is directly related to its angular speed (ω) and the radius (r) of the circle. The relationship is given by v = rω. This means that for a given angular speed, a larger radius results in a greater linear speed.
An object moving in a circle, even at a constant linear speed, is continuously accelerating. This is because its velocity vector is constantly changing direction. This acceleration, known as centripetal acceleration, is always directed towards the centre of the circle and is perpendicular to the instantaneous velocity of the object. It can be calculated using a = v²/r or a = rω².
Centripetal acceleration (linear speed)
a = v² / r
Acceleration is directed towards the centre of the circle.
Centripetal acceleration (angular speed)
a = rω²
Acceleration is directed towards the centre of the circle.
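A quick numerical check that the two centripetal-acceleration forms agree (the wheel values are illustrative):

```python
import math

# A point on a wheel of radius 0.30 m turning at 2.0 rev/s.
r = 0.30                      # radius, m
omega = 2.0 * 2 * math.pi     # angular speed, rad/s (2 rev/s)
v = r * omega                 # linear speed, v = r * omega

a_from_v = v**2 / r           # centripetal acceleration, a = v^2 / r
a_from_omega = r * omega**2   # centripetal acceleration, a = r * omega^2
print(abs(a_from_v - a_from_omega) < 1e-9)   # True: the two forms agree
```

Both forms follow from v = rω, so they must give the same value; choosing between them is a matter of which variables the question supplies.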
Students often believe that centripetal force is an outward force (centrifugal force). Remember that centripetal force is always an inward resultant force, directed towards the centre of the circle.
For an object to experience centripetal acceleration, a resultant force, known as the centripetal force, must act on it. This force is always directed towards the centre of the circle and is perpendicular to the object's instantaneous velocity. It is not a fundamental force itself but is provided by other physical forces such as tension, gravity, friction, or the normal force. The magnitude of the centripetal force is given by F = mv²/r or F = mrω².
Centripetal force (linear speed)
F = mv² / r
Force is directed towards the centre of the circle.
Centripetal force (angular speed)
F = mrω²
Force is directed towards the centre of the circle.
Students often fail to identify the specific physical force (e.g., tension, gravity, friction, normal force) that provides the centripetal force in different scenarios. Always explicitly state which force is acting as the centripetal force.
Always start by drawing a clear free-body diagram, resolving forces radially towards the centre of the circle.
Circular motion is observed in many contexts. For a satellite in Earth orbit, gravity provides the centripetal force. For a car travelling on a curved path, friction between the tyres and the road provides the necessary centripetal force. On a banked track, the horizontal component of the normal reaction force contributes to the centripetal force, allowing for higher speeds without skidding. Aircraft banking also uses a component of the lift force to provide centripetal force for turning.
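For the banked-track case, resolving the normal force gives the standard no-friction condition tan θ = v²/(gr), which can be evaluated numerically (the speed and radius below are illustrative):

```python
import math

# Frictionless banking: N sin(theta) = m v^2 / r  (centripetal)
#                       N cos(theta) = m g        (vertical balance)
# Dividing the two equations gives tan(theta) = v^2 / (g * r).
g = 9.81    # gravitational field strength, N/kg
v = 25.0    # speed of the car, m/s (illustrative)
r = 200.0   # radius of the bend, m (illustrative)

theta = math.degrees(math.atan(v**2 / (g * r)))
print(f"banking angle = {theta:.1f} degrees")   # about 17.7 degrees
```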
When an object moves in a vertical circle, the centripetal force required varies depending on its position due to the influence of gravity. At the bottom of the circle, the normal contact force or tension must be greater than the weight to provide the net upward centripetal force. At the top, both the normal contact force (or tension) and weight contribute to the downward centripetal force. The minimum speed at the top for the object to complete the circle occurs when the normal contact force or tension becomes zero, with weight alone providing the centripetal force.
Centripetal force at bottom of vertical circle
F − mg = mv² / r
Applies when an object is at the lowest point of a vertical circular path, where F is the tension or normal contact force.
Centripetal force at top of vertical circle
F + mg = mv² / r
Applies when an object is at the highest point of a vertical circular path, where F is the tension or normal contact force.
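The minimum-speed condition at the top (contact force or tension zero, weight alone providing the centripetal force) gives v_min = √(gr); a numerical sketch with an illustrative radius:

```python
import math

# At the top with zero contact force: m g = m v^2 / r, so v_min = sqrt(g r)
g = 9.81   # gravitational field strength, N/kg
r = 1.5    # radius of the vertical circle, m (illustrative)

v_min = math.sqrt(g * r)
print(f"v_min = {v_min:.2f} m/s")   # v_min = 3.84 m/s
```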
Students often incorrectly apply force equations in vertical circular motion, especially at the top and bottom of the circle where weight acts differently relative to the centripetal force. Remember to consider the direction of weight in relation to the centre of the circle.
In 'explain' questions, explicitly identify the physical force(s) providing the centripetal force (e.g., tension in a string, gravitational force on a satellite, friction on a cornering car).
Set up your main equation by equating the resultant force towards the centre with either mv²/r or mrω².
Choose the correct formula based on the question's variables: use ω-based formulas (rω², mrω²) if given period (T) or frequency, and v-based formulas (v²/r, mv²/r) if given linear speed.
For questions on banked tracks, remember that the centripetal force is provided by the horizontal component of the normal reaction force.
Ensure all calculations use radians for angles and angular speed (rad/s), not degrees, to avoid common errors in circular motion problems.
Definitions Bank
radian (rad)
The angle subtended at the centre of a circle by an arc equal in length to the radius of the circle.
angular speed
The angle swept out by the radius of the circle per unit time.
angular velocity
The angular speed in a given direction (for example, clockwise).
centripetal acceleration
The acceleration towards the centre of the circle for an object travelling in a circle.
centripetal force
The resultant force acting towards the centre of the circle, required to make an object accelerate in circular motion.
Command Word Guide
| Explain | When asked to explain circular motion or centripetal force, you must explicitly identify the specific physical force (e.g., tension, gravity, friction, normal force) that provides the centripetal force in the given scenario. |
| Calculate | Ensure you use the correct units (radians for angles and angular speed) and select the appropriate formula (v-based or ω-based) depending on the given variables. |
Common Mistakes
Confusing linear velocity with linear speed.
Velocity is a vector quantity, and its direction changes continuously in circular motion, even if the speed is constant. Therefore, there is always an acceleration.
Believing that centripetal force is an outward force (centrifugal force).
The resultant force causing circular motion is always centripetal, meaning it is directed inwards, towards the centre of the circle.
Not understanding that centripetal force is always perpendicular to the instantaneous velocity.
The centripetal force acts radially inwards, while the velocity is tangential to the circle, making them perpendicular.
This chapter explores gravitational fields, defining them as regions where masses experience force and quantifying this with gravitational field strength. It covers Newton's law of gravitation, its application to circular orbits including geostationary satellites, and introduces the concepts of gravitational potential and potential energy, emphasizing their negative values.
gravitational field — A region of space where a mass experiences a force.
A gravitational field is an example of a field of force, meaning it is a region where objects with mass will experience a force due to the presence of other masses. This field extends infinitely, though its strength diminishes with distance.
gravitational field strength — The force per unit mass acting on a small mass placed at that point.
Gravitational field strength, denoted by 'g', quantifies the intensity of a gravitational field. It is defined as the gravitational force experienced by a small test mass divided by that mass. It also represents the acceleration of free fall at that point.
Gravitational Field Strength (general)
g = F / m
This formula defines gravitational field strength as the force per unit mass and also represents the acceleration of free fall.
gravitational field line — The direction of the gravitational force acting on a point mass.
Gravitational field lines are used to visually represent a gravitational field. They indicate the direction a small test mass would experience a force, always pointing towards the mass creating the field. The density of these lines indicates the strength of the field.
When drawing field lines for a spherical mass, ensure they are radial, point inwards, are perpendicular to the surface, and become more spread out with distance.
Newton’s law of gravitation — States that two point masses attract each other with a force that is proportional to the product of their masses and inversely proportional to the square of their separation.
Newton's law of gravitation describes the attractive force between any two point masses. This force is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centres. This law is fundamental to understanding gravitational interactions.
Newton's Law of Gravitation
F = Gm₁m₂ / r²
This law applies to two point masses or uniform spheres, where 'r' is the distance between their centres.
gravitational constant — The constant of proportionality G in Newton's law of gravitation.
The gravitational constant, G, is a universal constant that determines the strength of the gravitational force. Its value is approximately 6.67 × 10⁻¹¹ N m² kg⁻², and it is essential for calculating gravitational forces.
Students often incorrectly apply Newton's law of gravitation. Remember that the law applies to point masses or uniform spheres, with 'r' being the separation of their centres, not just their surfaces.
The gravitational field strength 'g' at a distance 'r' from a point mass 'M' can be derived by equating Newton's law of gravitation (F = GMm/r²) with the general definition of gravitational field strength (F = mg). This derivation leads to a specific formula for 'g' that shows its inverse square relationship with distance.
Gravitational Field Strength (point mass)
g = GM / r²
This formula is used for the gravitational field strength of a point mass or outside a uniform sphere, where 'r' is the distance from the centre of mass.
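Substituting standard values for the Earth into g = GM/r² recovers the familiar surface value (constants rounded to three significant figures):

```python
# g = G M / r^2 at the Earth's surface
G = 6.67e-11   # gravitational constant, N m^2 kg^-2
M = 5.97e24    # mass of the Earth, kg
r = 6.37e6     # mean radius of the Earth, m

g = G * M / r**2
print(f"g = {g:.2f} N/kg")   # g = 9.81 N/kg
```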
Be prepared to derive g = GM/r² by setting Newton's universal law of gravitation (F = GMm/r²) equal to the formula for weight (F = mg).
weight — The force acting on a mass in a gravitational field, equal to the product of mass and gravitational field strength.
Weight is the force experienced by an object due to gravity. It is calculated as the product of the object's mass and the gravitational field strength at its location. Unlike mass, which is an invariant property, weight changes depending on the local gravitational field.
Weight
W = mg
Weight depends on gravitational field strength, while mass is an invariant property of an object.
Students often confuse mass and weight. Remember that mass is invariant, a scalar property of matter measured in kg, while weight is the gravitational force on a mass, measured in N, and depends on gravitational field strength.
Avoid assuming constant 'g' over large distances. 'g' is only approximately constant for small changes in height near the Earth's surface; it decreases with the square of the distance from the center of the mass.
Satellites and planets often describe circular orbits around a central body. In such cases, the gravitational force between the orbiting body and the central body provides the necessary centripetal force to maintain the circular motion. By equating these two forces, relationships between orbital speed, radius, and period can be derived.
For questions on circular orbits, always start by equating the gravitational force to the centripetal force: GMm/r² = mv²/r or GMm/r² = mω²r.
Kepler’s third law of planetary motion — For planets or satellites describing circular orbits about the same central body, the square of the period is proportional to the cube of the radius of the orbit.
Kepler's third law describes a fundamental relationship for objects orbiting a common central body. It states that the square of the orbital period (T) is directly proportional to the cube of the orbital radius (r). This law can be derived from Newton's law of gravitation and the centripetal force equation.
Kepler's Third Law (derived)
T² = (4π² / GM) r³
This derived form of Kepler's Third Law applies to circular orbits about the same central body, where M is the mass of the central body.
geostationary orbit — An equatorial orbit with exactly the same period of rotation as the Earth (24 hours), in which the satellite moves in the same direction as the Earth's rotation (west to east) so that it is always above the same point on the Equator.
A geostationary orbit is a specific type of circular orbit around the Earth. Satellites in this orbit have a period of 24 hours, orbit from west to east, and are positioned directly above the Equator. This allows them to remain stationary relative to a point on the Earth's surface.
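Rearranging Kepler's third law, T² = (4π²/GM) r³, for r and substituting a 24-hour period gives the geostationary orbital radius. The sketch below uses standard Earth values and the 24-hour solar day (the true period is the slightly shorter sidereal day, which changes the answer only marginally):

```python
import math

# r = (G M T^2 / (4 pi^2))^(1/3) for a geostationary satellite
G = 6.67e-11    # gravitational constant, N m^2 kg^-2
M = 5.97e24     # mass of the Earth, kg
T = 24 * 3600   # orbital period, s (24 hours)

r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"r = {r:.3g} m")   # about 4.22e7 m from the Earth's centre
```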
geostationary satellites — Satellites in geostationary orbits.
Geostationary satellites are crucial for communication and weather monitoring as their fixed position relative to the Earth's surface allows for continuous coverage of a specific region. Their unique orbital characteristics are a direct consequence of gravitational principles.
apparent weightlessness — The situation experienced in free fall, where objects appear weightless, as opposed to true weightlessness in the absence of a gravitational field.
Apparent weightlessness occurs when an object is in continuous free fall, such as astronauts in orbit. They appear weightless because they are constantly accelerating towards the Earth at the same rate as their spacecraft, not because there is no gravitational field acting on them.
Students often assume true weightlessness in orbit. Objects in orbit experience apparent weightlessness due to continuous free fall, not the absence of a gravitational field. They are still very much under the influence of gravity.
Gravitational potential — The gravitational potential at a point in a gravitational field is the work done per unit mass in bringing a small test mass from infinity to that point.
Gravitational potential, denoted by Φ, is a scalar quantity that describes the potential energy per unit mass at a given point in a gravitational field. It is defined with respect to infinity, where the potential is considered zero. Due to the attractive nature of gravity, gravitational potential values are always negative.
Gravitational Potential: Φ = -GM/r
Gravitational potential at infinity is defined as zero. The potential is negative due to the attractive nature of the gravitational force, meaning work is done by the field as a mass moves from infinity.
Students often misinterpret the negative sign in gravitational potential/energy. The negative sign indicates that work is done by the field as a mass moves from infinity, and the potential decreases as the mass gets closer to the field-producing body.
When defining 'gravitational potential', you must include the key phrases 'work done per unit mass' and 'bringing a mass from infinity' to score full marks.
Gravitational potential energy (Ep) is the energy an object possesses due to its position in a gravitational field. It is directly related to gravitational potential, being the product of the mass of the object and the gravitational potential at its location. Like gravitational potential, it is defined as zero at infinity and is always negative for attractive gravitational fields.
Gravitational Potential Energy: Ep = -GMm/r
This formula is for two isolated point masses or an object in the field of a point mass. The negative sign indicates that the system is bound and work must be done to separate the masses.
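The relations Φ = -GM/r and Ep = mΦ can be checked numerically. A sketch assuming rounded values for G, the Earth's mass and the Earth's radius (assumed data, not from the notes):

```python
# Assumed, rounded constants (not given in the notes)
G = 6.67e-11        # N m^2 kg^-2
M_EARTH = 5.97e24   # kg
R_EARTH = 6.37e6    # m

def potential(r, M=M_EARTH):
    """Gravitational potential phi = -G*M/r (defined as zero at infinity)."""
    return -G * M / r

def potential_energy(m, r, M=M_EARTH):
    """Gravitational potential energy Ep = m * phi = -G*M*m/r."""
    return m * potential(r, M)

phi_surface = potential(R_EARTH)          # J/kg; always negative
Ep_1kg = potential_energy(1.0, R_EARTH)   # J, for a 1 kg mass at the surface

# Moving a mass outwards makes Ep less negative, so external work is needed
work_needed = potential_energy(1.0, 2 * R_EARTH) - Ep_1kg
```

The positive `work_needed` illustrates the meaning of the negative sign: the system is bound, and work must be done against the field to separate the masses.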
In calculations, always convert distances to metres (e.g., from km) and time periods to seconds before substituting them into equations.
Always double-check units and significant figures in your final answers. Ensure you understand the physical meaning of negative signs in potential and potential energy calculations.
Definitions Bank
gravitational field
A region of space where a mass experiences a force.
gravitational field strength
The force per unit mass acting on a small mass placed at that point.
gravitational field line
A line drawn so that its direction at any point shows the direction of the gravitational force acting on a point mass placed there.
Newton’s law of gravitation
States that two point masses attract each other with a force that is proportional to the product of their masses and inversely proportional to the square of their separation.
gravitational constant
The constant of proportionality G in Newton's law of gravitation.
Common Mistakes
Confusing mass and weight.
Mass is an invariant scalar property of matter (in kg), while weight is the gravitational force on a mass (in N) and depends on gravitational field strength.
Assuming true weightlessness in orbit.
Objects in orbit experience 'apparent weightlessness' due to continuous free fall, not the absence of a gravitational field.
Incorrectly applying Newton's law of gravitation.
The law applies to point masses or uniform spheres, with 'r' being the separation of their centres, not just their surfaces.
This chapter introduces temperature and thermal equilibrium, explaining how thermal energy transfers from hotter to colder regions. It covers various temperature measurement methods, defines the Kelvin and Celsius scales, and introduces absolute zero. The chapter also details specific heat capacity and specific latent heat for calculating thermal energy changes during temperature variations and phase transitions.
Thermal equilibrium — When different regions in thermal contact are at the same temperature, they are said to be in thermal equilibrium.
Thermal energy is always transferred from a region of higher temperature to a region of lower temperature. When two regions in thermal contact reach the same temperature, there is no net flow of thermal energy between them, and they are said to be in thermal equilibrium.
Thermometer — An instrument for measuring temperature.
A thermometer is a device designed to quantify the degree of 'hotness' of a body. Its operation relies on a physical property that changes predictably with temperature.
Thermometric property — The physical property on which a particular thermometer is based.
For a thermometer to function, it must utilise a physical property of a substance that varies consistently with temperature. Examples include the expansion of a liquid, the resistance of a wire, or the pressure of a gas at constant volume.
Thermometric substance — The working material of the thermometer, the property of which varies with temperature.
This is the specific material within a thermometer whose thermometric property is observed. For instance, in a liquid-in-glass thermometer, the liquid (e.g., mercury or alcohol) is the thermometric substance, and its volume is the thermometric property.
Fixed points — Reference temperatures defined by the fact that substances change state at fixed temperatures.
Fixed points are crucial for establishing temperature scales. These are reproducible temperatures at which substances undergo phase changes, such as the melting point of ice (ice point) or the boiling point of water (steam point) at standard atmospheric pressure.
Ice point — The melting point of ice.
The ice point is a standard fixed point used in temperature scale calibration. It refers to the temperature at which pure ice melts under normal atmospheric pressure.
Steam point — The temperature of steam above water boiling at normal atmospheric pressure.
The steam point is another critical fixed point, representing the temperature at which pure water boils and turns into steam under normal atmospheric pressure. These two points historically defined the 0°C and 100°C marks on the Celsius scale.
Temperature is a measure of the degree of 'hotness' of a body, distinct from the total amount of thermal energy it contains. Thermal energy naturally transfers from a region of higher temperature to a region of lower temperature. This transfer continues until all regions in thermal contact reach the same temperature, at which point they are in thermal equilibrium, and there is no net flow of thermal energy.
Students often confuse temperature with thermal energy. Remember that temperature measures the degree of 'hotness' of a body, not the amount of thermal energy (heat energy).
Empirical scale of temperature — A temperature scale set up by taking the value of a thermometric property at two fixed points and dividing the range of values into a number of equal steps.
Empirical scales are constructed by assigning specific values to two fixed points, such as the ice point and steam point, and then linearly interpolating between them. The Celsius scale is an example of an empirical scale, initially defined by these two points.
Empirical centigrade temperature: θ = (P_θ - P_i)/(P_s - P_i) × 100
This formula is used for an empirical centigrade scale of a particular thermometer, where P_i is the thermometric property value at the ice-point and P_s is the value at the steam-point.
Different thermometric substances and properties lead to different empirical centigrade scales, agreeing only at fixed points, due to non-linear variation with temperature. This means an empirical scale is dependent on the specific substance used.
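The linear interpolation behind an empirical centigrade scale can be illustrated with a hypothetical resistance thermometer. The resistance values below are invented purely for illustration:

```python
def empirical_centigrade(P_theta, P_i, P_s):
    """Empirical centigrade temperature from a thermometric property P:
    theta = (P_theta - P_i) / (P_s - P_i) * 100,
    where P_i is the property value at the ice point and P_s at the steam point."""
    return (P_theta - P_i) / (P_s - P_i) * 100

# Hypothetical resistance thermometer (illustrative values only)
R_ice, R_steam = 100.0, 139.1   # resistance / ohm at the two fixed points
theta = empirical_centigrade(119.5, R_ice, R_steam)   # ~49.9 on this scale
```

By construction the scale reads 0 at the ice point and 100 at the steam point; readings in between depend on how the chosen property varies with temperature, which is why different thermometers agree only at the fixed points.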
Absolute zero — The lowest theoretically possible temperature, which is –273.15 degrees Celsius or zero kelvin.
Absolute zero represents the theoretical point at which a substance has minimum possible thermal energy. It is the fundamental lower limit of temperature, unattainable in practice but crucial for defining the thermodynamic temperature scale.
Kelvin — The unit of thermodynamic temperature, defined as the fraction 1/273.16 of the thermodynamic temperature of the triple point of water.
The Kelvin is the SI unit for thermodynamic temperature. This scale is absolute, meaning zero Kelvin corresponds to absolute zero, and it does not depend on the properties of any particular substance, making it a fundamental scientific scale.
Unlike empirical scales, the scale of thermodynamic temperature, measured in Kelvin (K), does not depend on the property of any particular substance. It is an absolute scale with its zero point, absolute zero, defined as the lowest theoretically possible temperature, which is -273.15 °C. The Kelvin scale is defined using the triple point of water as a single fixed point.
Thermodynamic temperature (constant-volume gas thermometer): T = 273.16 K × (p/p_tr)
This formula is based on the triple point of water as a fixed point, where p is the pressure reading at the unknown temperature T, and p_tr is the pressure reading at the triple point of water.
Celsius to Kelvin conversion: T/K = θ/°C + 273.15
This formula relates Celsius temperature (θ) to thermodynamic (Kelvin) temperature (T).
The degree sign (°) is always used with Celsius (°C) but never with Kelvin (K). For example, it's 20 °C but 293 K.
When converting between Celsius and Kelvin, use the full value T/K = θ/°C + 273.15 unless the question specifies otherwise.
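The conversion T/K = θ/°C + 273.15 is simple enough to express directly:

```python
def celsius_to_kelvin(theta):
    """T/K = theta/degC + 273.15"""
    return theta + 273.15

def kelvin_to_celsius(T):
    """theta/degC = T/K - 273.15"""
    return T - 273.15

room_K = celsius_to_kelvin(20.0)     # 20 degC is 293.15 K
abs_zero_C = kelvin_to_celsius(0.0)  # 0 K is -273.15 degC (absolute zero)
```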
Specific heat capacity — The thermal energy per unit mass required to raise the temperature of the substance by one degree.
Specific heat capacity (c) quantifies how much thermal energy a substance can store for a given temperature change. A substance with a high specific heat capacity requires more energy to change its temperature than one with a low specific heat capacity, for the same mass and temperature change.
Thermal energy and specific heat capacity: ΔQ = mcΔθ
This formula is used to calculate the thermal energy (ΔQ) supplied to a substance of mass (m) and specific heat capacity (c) to achieve a temperature rise (Δθ).
Specific heat capacity is approximately constant for small temperature changes but can vary considerably over a wide range.
When defining specific heat capacity, you must state it is energy 'per unit mass' and 'per unit temperature change' for full marks.
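A short sketch of ΔQ = mcΔθ, using an assumed specific heat capacity of water of about 4200 J kg^-1 K^-1 (a typical textbook figure, not a value quoted in these notes):

```python
def thermal_energy(m, c, delta_theta):
    """Q = m * c * delta_theta: thermal energy to raise a mass m of a
    substance with specific heat capacity c by delta_theta.
    A kelvin and a Celsius degree are the same size, so either works."""
    return m * c * delta_theta

# Assumed value: c of water ~ 4200 J kg^-1 K^-1
Q = thermal_energy(0.50, 4200, 30)   # heat 0.50 kg of water by 30 K -> 63 kJ
```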
Specific latent heat — The quantity of heat per unit mass required to change the state of a substance at constant temperature.
Specific latent heat (L) is the energy absorbed or released during a phase change (e.g., melting, boiling) without any change in temperature. This energy is used to break or form intermolecular bonds, rather than increasing kinetic energy of molecules.
Latent heat of fusion — The thermal energy required to melt (fuse) a solid without any change of temperature.
Latent heat of fusion refers to the total thermal energy needed to convert a given mass of solid into liquid at its melting point. During this process, the temperature remains constant as the energy is used to overcome the forces holding the particles in a fixed lattice structure.
Specific latent heat of fusion — The quantity of thermal energy required to convert unit mass of solid to liquid without any change in temperature.
Specific latent heat of fusion (L_f) is a material property that quantifies the energy needed per kilogram to melt a substance. This energy is absorbed by the substance to change its state from solid to liquid at its melting point, without any temperature increase.
Thermal energy and specific latent heat of fusion: ΔQ = mL_f
This formula is used to calculate the thermal energy (ΔQ) required for melting or freezing a substance of mass (m) at a constant temperature, where L_f is the specific latent heat of fusion.
Latent heat of vaporisation — The latent heat required to vaporise a liquid without any change of temperature.
Latent heat of vaporisation is the total thermal energy needed to convert a given mass of liquid into gas at its boiling point. The temperature remains constant as the energy is used to completely separate the molecules and do work against the atmosphere.
Specific latent heat of vaporisation — The quantity of thermal energy required to convert unit mass of liquid to vapour without any change in temperature.
Specific latent heat of vaporisation (L_v) is the energy needed per kilogram to vaporise a substance. This energy is significantly greater than the specific latent heat of fusion because more energy is required to completely separate molecules and do work against the atmosphere during vaporisation.
Thermal energy and specific latent heat of vaporisation: ΔQ = mL_v
This formula is used to calculate the thermal energy (ΔQ) required for vaporisation or condensation of a substance of mass (m) at a constant temperature, where L_v is the specific latent heat of vaporisation.
Remember that temperature remains constant during a phase change (melting or boiling). All energy supplied during this time goes into changing the state, not increasing temperature.
Don't forget that the specific latent heat of vaporisation is significantly greater than the specific latent heat of fusion for the same substance because more energy is required to completely separate molecules and do work against the atmosphere during vaporisation.
For definitions of specific latent heat, you must mention 'per unit mass' and 'without any change in temperature' to score.
When substances at different temperatures are mixed or interact, thermal energy is exchanged until thermal equilibrium is reached. The principle of conservation of energy dictates that the heat energy lost by the hotter substance equals the heat energy gained by the colder substance. This principle is applied in calculations involving temperature changes and phase changes, often requiring the use of specific heat capacity and specific latent heat formulas.
In thermal exchange calculations (e.g., mixing liquids), always start by stating the principle of conservation of energy: 'heat lost by hotter body = heat gained by colder body'.
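The conservation principle 'heat lost by hotter body = heat gained by colder body' can be solved directly for the final temperature. A sketch for mixing two samples of water, with the specific heat capacity assumed to be 4200 J kg^-1 K^-1 (it cancels when both substances are the same):

```python
def mixing_temperature(m1, c1, theta1, m2, c2, theta2):
    """Final temperature when two bodies reach thermal equilibrium with no
    heat loss to the surroundings. From
    m1*c1*(theta1 - theta_f) = m2*c2*(theta_f - theta2)."""
    return (m1 * c1 * theta1 + m2 * c2 * theta2) / (m1 * c1 + m2 * c2)

# 0.20 kg of water at 80 degC mixed with 0.30 kg of water at 20 degC
c_water = 4200   # assumed J kg^-1 K^-1
theta_f = mixing_temperature(0.20, c_water, 80, 0.30, c_water, 20)  # 44 degC
```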
For calculations involving heating a substance through a phase change (e.g., ice to steam), calculate the energy for each stage separately (heating solid, melting, heating liquid, boiling) before summing them.
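The stage-by-stage approach can be sketched for heating ice at -10 °C all the way to steam at 100 °C. All property values below are assumed, typical figures rather than values from the notes:

```python
# Energy to turn 0.10 kg of ice at -10 degC into steam at 100 degC.
# Assumed typical values (not from the notes):
m = 0.10           # kg
c_ice = 2100       # J kg^-1 K^-1
c_water = 4200     # J kg^-1 K^-1
L_f = 3.3e5        # specific latent heat of fusion / J kg^-1
L_v = 2.26e6       # specific latent heat of vaporisation / J kg^-1

Q_warm_ice = m * c_ice * 10       # heat the ice from -10 to 0 degC
Q_melt = m * L_f                  # melt at a constant 0 degC
Q_warm_water = m * c_water * 100  # heat the water from 0 to 100 degC
Q_boil = m * L_v                  # boil at a constant 100 degC

Q_total = Q_warm_ice + Q_melt + Q_warm_water + Q_boil   # ~3.0e5 J
```

Note how the boiling stage dominates the total, reflecting the fact that L_v is much greater than L_f.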
Definitions Bank
Thermal equilibrium
When different regions in thermal contact are at the same temperature, they are said to be in thermal equilibrium.
Thermometer
An instrument for measuring temperature.
Thermometric property
The physical property on which a particular thermometer is based.
Thermometric substance
The working material of the thermometer, the property of which varies with temperature.
Fixed points
Reference temperatures defined by the fact that substances change state at fixed temperatures.
Common Mistakes
Confusing temperature with thermal energy.
Temperature measures the degree of 'hotness' of a body, not the amount of thermal energy (heat energy).
Assuming all empirical temperature scales are identical.
Different thermometric substances and properties lead to different empirical centigrade scales, agreeing only at fixed points, due to non-linear variation with temperature.
Assuming specific heat capacity is always constant.
Specific heat capacity is approximately constant for small temperature changes but can vary considerably over a wide range.
This chapter introduces the mole as an SI base quantity and its relation to the Avogadro constant. It then details the ideal gas equation of state, derived from empirical gas laws, defining an ideal gas as one that obeys this relationship. Finally, the kinetic theory of gases is presented, linking macroscopic gas properties to microscopic molecular motion and establishing the direct proportionality between average translational kinetic energy and thermodynamic temperature.
mole — The amount of substance which contains 6.02214076 × 10^23 elementary entities, usually atoms or molecules but could also be ions or electrons.
The mole is the SI base unit for the amount of substance. It provides a convenient way to count a very large number of particles, such as atoms or molecules, by relating them to a macroscopic quantity of substance.
Avogadro constant — The number of elementary entities in 1 mole of any substance.
The Avogadro constant, denoted N_A, is approximately 6.02 × 10^23 mol^-1. It represents the fixed number of particles (atoms, molecules, ions, or electrons) present in one mole of any substance.
Avogadro constant: N_A = 6.02 × 10^23 mol^-1
This value is usually given to three significant figures and represents the number of particles per mole.
Amount of substance (from particles): n = N/N_A
This formula relates the amount of substance in moles (n) to the total number of elementary entities and the Avogadro constant.
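The relation n = N/N_A in code, using the three-significant-figure value of the Avogadro constant:

```python
N_A = 6.02e23  # Avogadro constant / mol^-1 (3 s.f., as in the notes)

def moles_from_particles(N):
    """n = N / N_A: amount of substance from a number of elementary entities."""
    return N / N_A

def particles_from_moles(n):
    """N = n * N_A: number of elementary entities in n moles."""
    return n * N_A

n = moles_from_particles(3.01e23)   # 3.01e23 molecules is half a mole
```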
molar mass — The mass of 1 mole of substance.
Molar mass is the mass, typically expressed in grams per mole (g/mol), of one mole of a particular chemical substance. It is a crucial quantity for converting between the mass of a substance and its amount in moles.
Boyle’s law — The volume V of a gas is inversely proportional to its pressure p, provided that the temperature is held constant.
Boyle's law describes the inverse relationship between the pressure and volume of a fixed mass of gas when its temperature is kept constant. As pressure increases, volume decreases proportionally.
Boyle's Law: p1V1 = p2V2
This equation applies to a fixed mass of gas when its temperature remains constant.
Charles’ law — The relation between initial and final volume and thermodynamic temperature of a fixed mass of gas at constant pressure, V1/T1 = V2/T2.
Charles' law states that for a fixed mass of gas at constant pressure, its volume is directly proportional to its thermodynamic temperature. This means that as temperature increases, the volume expands proportionally.
Charles' Law: V1/T1 = V2/T2
This law is valid for a fixed mass of gas at constant pressure, where T must be in Kelvin.
Gay-Lussac’s law — The relation between initial and final pressure and thermodynamic temperature of a fixed mass of gas at constant volume, p1/T1 = p2/T2.
Gay-Lussac's law describes the direct proportionality between the pressure and thermodynamic temperature of a fixed mass of gas when its volume is held constant. An increase in temperature leads to a proportional increase in pressure.
Gay-Lussac's Law: p1/T1 = p2/T2
This equation applies to a fixed mass of gas at constant volume, with T in Kelvin.
Combined Gas Law: p1V1/T1 = p2V2/T2
This law combines Boyle's, Charles', and Gay-Lussac's laws for a fixed mass of gas, where T is the thermodynamic temperature in Kelvin.
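A sketch of the combined gas law with the Kelvin conversion made explicit; the numbers are invented for illustration:

```python
def combined_gas_law_p2(p1, V1, T1, V2, T2):
    """Solve p1*V1/T1 = p2*V2/T2 for p2.
    Temperatures MUST be thermodynamic temperatures in kelvin."""
    return p1 * V1 * T2 / (T1 * V2)

# A gas at 100 kPa and 27 degC is compressed from 2.0e-3 m^3 to 1.0e-3 m^3
# while being heated to 127 degC. Convert the temperatures to kelvin first:
T1 = 27 + 273    # 300 K
T2 = 127 + 273   # 400 K
p2 = combined_gas_law_p2(100e3, 2.0e-3, T1, 1.0e-3, T2)   # ~2.7e5 Pa
```

Using 27 and 127 directly instead of 300 K and 400 K would give a wildly wrong answer, which is why the Kelvin conversion is the usual first step.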
ideal gas — One which obeys the equation of state pV ∝ T at all pressures p, volumes V and thermodynamic temperatures T.
An ideal gas is a theoretical gas that perfectly adheres to the ideal gas equation under all conditions. Real gases approximate ideal gas behavior at low pressures and high temperatures, but deviate at extreme conditions.
The empirical gas laws (Boyle's, Charles', and Gay-Lussac's) can be combined to form a single relationship known as the ideal gas equation of state. This equation describes the behavior of an ideal gas, which is defined as a gas that strictly obeys this relationship at all pressures, volumes, and thermodynamic temperatures. It provides a fundamental link between the macroscopic properties of a gas.
molar gas constant — A constant R, with value 8.3 J K^-1 mol^-1, used in the ideal gas equation pV = nRT, which has the same value for all gases.
The molar gas constant, R, is a universal constant that appears in the ideal gas equation when the amount of substance is expressed in moles. It has a value of approximately 8.3 J K^-1 mol^-1 and is the same for all ideal gases.
universal gas equation — The equation pV = nRT, also known as the equation of state for an ideal gas.
The universal gas equation, pV = nRT, is a fundamental equation describing the state of an ideal gas. It relates pressure (p), volume (V), amount of substance (n), the molar gas constant (R), and thermodynamic temperature (T).
Ideal Gas Equation (moles): pV = nRT
This is the universal gas equation, or equation of state for an ideal gas, where R is the molar gas constant (8.3 J K^-1 mol^-1) and T is in Kelvin.
Boltzmann constant — A constant k, with value 1.38 × 10^-23 J K^-1, used in the ideal gas equation pV = NkT.
The Boltzmann constant, k, is a fundamental physical constant that relates the average kinetic energy of particles in a gas to the thermodynamic temperature of the gas. It is used in the ideal gas equation when the number of individual molecules (N) is considered, rather than moles.
Ideal Gas Equation (molecules): pV = NkT
This form of the ideal gas equation uses the number of molecules (N) and the Boltzmann constant (k = 1.38 × 10^-23 J K^-1), with T in Kelvin.
Boltzmann constant relation: k = R/N_A
This equation shows the relationship between the Boltzmann constant (k), the molar gas constant (R), and the Avogadro constant (N_A).
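The two forms of the ideal gas equation, and the relation k = R/N_A, can be checked numerically. This sketch assumes R = 8.31 J K^-1 mol^-1 (slightly more precise than the 8.3 quoted in the notes) and an assumed molar volume of 0.0224 m^3 at 273 K:

```python
R = 8.31       # molar gas constant / J K^-1 mol^-1 (assumed, 3 s.f.)
N_A = 6.02e23  # Avogadro constant / mol^-1 (as quoted in the notes)
k = R / N_A    # Boltzmann constant / J K^-1, from k = R/N_A

# One mole of an ideal gas at 273 K in an assumed molar volume of 0.0224 m^3
n, V, T = 1.0, 0.0224, 273
p_moles = n * R * T / V               # pV = nRT  -> ~1.0e5 Pa
p_molecules = (n * N_A) * k * T / V   # pV = NkT, with N = n * N_A
```

Both forms give the same pressure (about one atmosphere), because k = R/N_A and N = nN_A make them algebraically identical.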
Students often confuse Celsius temperature with thermodynamic (Kelvin) temperature in gas law calculations. Remember that all gas law equations require temperature to be in Kelvin (K).
Students sometimes assume Boyle's, Charles', and Gay-Lussac's laws are universally valid for all real gases. Remember these laws are restricted to ideal gases or real gases under specific conditions (low pressure, high temperature).
Students often neglect to convert units (e.g., grams to kilograms, cm^3 to m^3, kPa to Pa) when using gas law equations. Always convert all quantities to SI base units before calculation.
Students incorrectly apply the ideal gas equation (pV = nRT or pV = NkT) by using the wrong constant (R vs k) or number of particles (n vs N). Choose the correct form based on whether you have moles or the number of molecules.
The kinetic theory of gases provides a microscopic model to explain the macroscopic properties of gases. It is based on several fundamental assumptions: gas molecules are identical hard spheres, they are in continuous random motion, their volume is negligible compared to the volume of the container, and there are no intermolecular forces except during perfectly elastic collisions.
Gas pressure arises from the continuous bombardment of the container walls by rapidly moving gas molecules. Each collision with a wall results in a change in momentum for the molecule. According to Newton's second law, this change in momentum exerts a force on the wall. The sum of these forces over the entire surface area of the container constitutes the macroscopic pressure exerted by the gas.
mean-square speed — The average value of the square of the speeds of molecules, represented by <c^2>.
The mean-square speed, denoted as <c^2>, is found by squaring the speed of each molecule and averaging these squared values over all the molecules. It is the quantity that appears in the kinetic theory equation and is not, in general, equal to the square of the mean speed.
root-mean-square speed — The quantity √<c^2>, the square root of the mean-square speed.
The root-mean-square (r.m.s.) speed is the square root of the mean-square speed. It provides a measure of the typical speed of molecules in a gas, taking into account the distribution of speeds. It is often used to characterize the average kinetic energy of gas molecules.
Students often misunderstand the difference between mean speed, mean speed squared, mean-square speed, and root-mean-square speed. Remember that the kinetic theory equation uses the mean-square speed, <c^2>, the average of the squares of the individual molecular speeds.
Kinetic Theory Equation: pV = 1/3 Nm<c^2>
This equation is derived from the basic assumptions of the kinetic theory of gases, linking macroscopic pressure and volume to microscopic molecular properties (number of molecules N, mass of one molecule m, and mean-square speed <c^2>).
Pressure in terms of density: p = 1/3 ρ<c^2>
This is an alternative form of the kinetic theory equation, where ρ is the density of the gas (Nm/V).
By equating the ideal gas equation (pV = NkT) with the kinetic theory equation (pV = 1/3 Nm<c^2>), it can be deduced that the average translational kinetic energy of a molecule, 1/2 m<c^2>, is equal to 3/2 kT.
Average kinetic energy of a molecule: E_k = 1/2 m<c^2>
This is the standard formula for the average kinetic energy of a single molecule, based on its mass and mean-square speed.
Average translational kinetic energy of a molecule (temperature): E_k = 3/2 kT
This fundamental relationship shows that the average translational kinetic energy of a molecule is directly proportional to the thermodynamic temperature (T) in Kelvin.
Mean-square speed (temperature): <c^2> = 3kT/m
This formula relates the mean-square speed of molecules to the thermodynamic temperature and the mass of a single molecule.
Root-mean-square speed: c_r.m.s. = √<c^2>
The root-mean-square speed is the square root of the mean-square speed.
Root-mean-square speed (temperature): c_r.m.s. = √(3kT/m)
This formula shows that the root-mean-square speed is proportional to the square root of the thermodynamic temperature and inversely proportional to the square root of the molecular mass.
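A sketch of c_r.m.s. = √(3kT/m) for nitrogen at room temperature. The molar mass of N2 (28 g mol^-1) is assumed data, not a value from the notes:

```python
import math

k = 1.38e-23   # Boltzmann constant / J K^-1 (as quoted in the notes)
N_A = 6.02e23  # Avogadro constant / mol^-1

def rms_speed(T, m):
    """c_rms = sqrt(3kT/m) for a molecule of mass m / kg at a
    thermodynamic temperature T / K."""
    return math.sqrt(3 * k * T / m)

# Mass of one N2 molecule: molar mass 28 g/mol (assumed), in kg per molecule
m_N2 = 28e-3 / N_A
c_rms = rms_speed(300, m_N2)   # ~5.2e2 m/s at 300 K
```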
When asked to 'state' the assumptions of the kinetic theory, list them as clear, separate bullet points for maximum marks.
For questions asking you to 'derive' pV = 1/3 Nm<c^2>, explain how molecular movement causes the pressure exerted by the gas and show the mathematical progression clearly, step by step.
Be prepared to combine pV = NkT and pV = 1/3 Nm<c^2> to deduce that the average translational kinetic energy of a molecule is 3/2 kT.
Choose the correct form of the ideal gas equation based on the data given: use pV = nRT if the amount is in moles, and pV = NkT if you have the number of molecules.
When calculating the root-mean-square speed, remember the final step is to take the square root of the mean-square speed (c_r.m.s. = √<c^2>).
Always convert all quantities to SI base units (pressure to Pascals, volume to cubic metres, mass to kilograms, temperature to Kelvin) before performing calculations in gas law problems.
Definitions Bank
mole
The amount of substance which contains 6.02214076 × 10^23 elementary entities.
Avogadro constant
The number of elementary entities in 1 mole of any substance.
molar mass
The mass of 1 mole of substance.
Boyle’s law
The volume V of a gas is inversely proportional to its pressure p, provided that the temperature is held constant.
Charles’ law
The relation between initial and final volume and thermodynamic temperature of a fixed mass of gas at constant pressure, V1/T1 = V2/T2.
Command Word Guide
| State | Provide concise, accurate facts without elaboration. For example, 'State the basic assumptions of the kinetic theory of gases' requires listing the assumptions clearly and separately. |
| Derive | Show the logical steps from initial principles or known equations to the final relationship. For example, 'Derive the relationship pV = 1/3 Nm<c^2>' requires explaining how molecular movement causes pressure and showing the mathematical progression. |
| Deduce | Reach a conclusion from given information or principles. For example, 'Deduce that the average translational kinetic energy of a molecule is 3/2 kT' requires combining the ideal gas equation and the kinetic theory equation. |
| Explain | Give a clear account of how something works or why something happens, including relevant theory. For example, 'Explain how molecular movement causes the pressure exerted by a gas' requires describing collisions and momentum change. |
Common Mistakes
Confusing Celsius temperature with thermodynamic (Kelvin) temperature in gas law calculations.
Always convert Celsius temperatures to Kelvin by adding 273 (or 273.15 for higher precision) before using them in gas law equations.
Assuming Boyle's, Charles', and Gay-Lussac's laws are universally valid for all real gases.
Remember these laws, and the ideal gas equation, apply perfectly only to ideal gases. Real gases deviate from ideal behavior, especially at high pressures and low temperatures.
Misunderstanding the difference between mean speed, mean speed squared, mean-square speed, and root-mean-square speed.
The kinetic theory equation uses the mean-square speed (<c^2>), which is the average of the squares of individual speeds. The root-mean-square speed is the square root of this value.
This chapter introduces thermodynamics, focusing on internal energy as the sum of molecular kinetic and potential energies. It presents the first law of thermodynamics, ΔU = q + w, which describes energy conservation in terms of internal energy change, thermal energy transfer, and work done. The chapter also covers calculating work done by expanding gases and applying the first law to various thermodynamic processes.
Thermodynamics — Thermodynamics is the study of processes involving the transfer of thermal energy and the doing of work.
Thermodynamics is a branch of physics concerned with heat and its relation to other forms of energy and work. It explores how energy is transferred and transformed within systems, particularly focusing on thermal energy and mechanical work.
internal energy — The sum of the potential energies and kinetic energies of all the molecules, owing to their random motion, is called the internal energy of the gas.
Internal energy (U) represents the total energy stored within a system at a molecular level. It comprises the random kinetic energies of the molecules due to their motion and the potential energies arising from intermolecular forces. This energy is determined by the state of the system.
adiabatic change — A thermodynamic change where no thermal energy is allowed to enter or leave the system.
An adiabatic change is a process where a system is perfectly insulated, preventing any transfer of thermal energy (q = 0) with its surroundings. This means any change in internal energy is solely due to work done on or by the system.
isothermal change — A change which takes place at constant temperature.
An isothermal change is a thermodynamic process that occurs at a constant temperature. For an ideal gas, this implies that the internal energy remains constant (ΔU = 0) because internal energy is directly related to temperature.
Work done by gas at constant pressure: work done by gas = pΔV
This formula gives the work done by a gas when its volume changes by ΔV at constant pressure p. When the gas expands (ΔV positive), it does work pΔV on its surroundings; in the first-law convention ΔU = q + w, the work done on the gas is then w = -pΔV. When the gas is compressed, work is done on the gas and w is positive.
First law of thermodynamics: ΔU = q + w
This law expresses the conservation of energy. ΔU is the increase in internal energy, q is the thermal energy transferred to the system (positive if added), and w is the work done on the system (positive if done on the system).
Internal energy is the sum of the random kinetic and potential energies of the molecules within a system. A rise in the temperature of an object directly corresponds to an increase in its internal energy. This is because temperature is related to the average kinetic energy of the molecules, and an increase in average kinetic energy contributes to the total internal energy.
Students often confuse temperature with internal energy. Remember that temperature is the average kinetic energy per molecule, while internal energy is the total kinetic and potential energy of all molecules in the system.
When a gas expands at a constant pressure, it does work on its surroundings. The work done by the gas can be calculated as pΔV, where p is the constant pressure and ΔV is the change in volume. It is crucial to distinguish between work done by the gas and work done on the gas, as this determines the sign of w in the first law of thermodynamics.
Students often incorrectly apply the sign convention for work done. In the equation ΔU = q + w, work done by the gas is negative, while work done on the gas is positive.
For calculation questions, explicitly state the formula w = pΔV and show your substitutions with correct unit conversions to secure method marks. Always convert volume from cm³ to m³ and pressure to Pascals (Pa).
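A short sketch of the pΔV calculation, showing the cm³ to m³ conversion and the sign flip between 'work done by the gas' and the w that appears in ΔU = q + w:

```python
def work_done_by_gas(p, dV):
    """Work done BY a gas whose volume changes by dV at constant pressure p.
    In the first-law convention dU = q + w (w = work done ON the gas),
    this quantity enters as w = -p*dV."""
    return p * dV

# A gas expands by 500 cm^3 at a constant pressure of 1.0e5 Pa.
dV = 500 * 1e-6                        # convert 500 cm^3 to 5.0e-4 m^3
w_by_gas = work_done_by_gas(1.0e5, dV) # 50 J done by the gas
w_on_gas = -w_by_gas                   # the w used in dU = q + w
```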
The first law of thermodynamics, ΔU = q + w, is a statement of the conservation of energy. It relates the change in a system's internal energy (ΔU) to the thermal energy transferred to the system (q) and the work done on the system (w). This law is fundamental to understanding energy transformations in various thermodynamic processes, including adiabatic changes and phase transitions.
When stating the first law of thermodynamics, always define each term in the equation ΔU = q + w and specify the sign convention for work done (positive w means work done on the system).
During a phase change, such as melting or boiling, thermal energy is supplied to the substance, but its temperature remains constant. This indicates that the average kinetic energy of the molecules does not change. Instead, the supplied thermal energy increases the potential energy of the molecules as they overcome intermolecular forces to change their arrangement or separation.
Students often assume that temperature must rise when heat is added. During a phase change, heat supplied increases internal energy by increasing potential energy, while temperature (and kinetic energy) stays constant.
If a question describes a system as 'perfectly insulated' or a process as 'happening very quickly', you should identify this as an adiabatic process where q = 0.
To explain why an expanding gas cools down adiabatically, state that the gas does work on its surroundings (w is negative), and since q=0, the internal energy (ΔU) must decrease, leading to a drop in temperature.
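The adiabatic reasoning reduces to a one-line sign check; the 150 J figure below is purely illustrative:

```python
# First law: dU = q + w, where w is the work done ON the system.
# Adiabatic expansion sketch: insulated, so q = 0; the gas does 150 J
# of work on its surroundings, so w = -150 J.
q = 0.0
w = -150.0

dU = q + w        # change in internal energy, in J
print(dU)         # -150.0: internal energy falls, so the gas cools
```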
Definitions Bank
internal energy
The sum of the potential energies and kinetic energies of all the molecules, owing to their random motion, is called the internal energy of the gas.
Thermodynamics
Thermodynamics is the study of processes involving the transfer of thermal energy and the doing of work.
adiabatic change
A thermodynamic change where no thermal energy is allowed to enter or leave the system.
isothermal change
A change which takes place at constant temperature.
Common Mistakes
Confusing temperature with internal energy.
Temperature is the average kinetic energy per molecule, while internal energy is the total kinetic and potential energy of all molecules.
Incorrectly applying the sign convention for work done in ΔU = q + w.
In ΔU = q + w, work done by the gas makes w negative, while work done on the gas makes w positive.
Assuming internal energy only relates to kinetic energy for real gases, neglecting potential energy contributions.
For real gases, internal energy includes potential energy from intermolecular forces, which changes when the volume changes.
This chapter explores oscillations, focusing on simple harmonic motion (SHM) where acceleration is proportional to and opposite to displacement. It covers the kinematics and energy transformations in SHM, including the interchange between kinetic and potential energy. Finally, the chapter examines damped and forced oscillations, culminating in the phenomenon of resonance.
Oscillation — One complete movement from the starting or rest position; for example, moving up, then down, and finally back to the rest position.
An oscillation describes a repetitive motion around an equilibrium point. It encompasses the full cycle of movement, returning to the initial state.
Period T — The time taken for one complete oscillation or vibration.
The period, denoted by T, is a fundamental characteristic of an oscillation, representing the duration for a single full cycle of motion to occur.
Frequency f — The number of oscillations or vibrations per unit time.
Frequency, f, quantifies how often an oscillation repeats within a given timeframe. It is inversely related to the period.
Frequency from Period: f = 1/T
Relates frequency to the period of oscillation.
Displacement — The distance from the equilibrium position.
Displacement refers to the instantaneous position of an oscillating object relative to its central, stable equilibrium point. It can be positive or negative depending on the direction from equilibrium.
Amplitude — The maximum displacement.
The amplitude represents the greatest distance an oscillating object moves from its equilibrium position. It is a measure of the 'size' of the oscillation.
Angular frequency — The square root of the constant \omega^2 (that is, \omega) in the defining equation for simple harmonic motion.
Angular frequency, \omega, is a measure of the rate of oscillation in radians per second. It is closely related to the frequency and period of the motion.
Angular frequency from frequency: \omega = 2\pi f
Relates angular frequency to frequency.
Angular frequency from period: \omega = \frac{2\pi}{T}
Relates angular frequency to period.
Simple harmonic motion (s.h.m.) — The motion of a particle about a fixed point such that its acceleration a is proportional to its displacement x from the fixed point, and is in the opposite direction.
Simple harmonic motion (SHM) is a specific type of oscillatory motion characterized by a restoring force directly proportional to the displacement and always directed towards the equilibrium position. This results in a sinusoidal variation of displacement with time.
When asked to define SHM, you must state both conditions: acceleration is proportional to displacement, and it is in the opposite direction (or towards a fixed point).
Harmonic oscillators — Oscillators which move in s.h.m.
Harmonic oscillators are systems that exhibit simple harmonic motion, meaning their restoring force follows Hooke's Law, leading to a predictable, sinusoidal oscillation.
Restoring force — A force that is always acting towards the fixed point from which displacement is measured.
The restoring force is crucial for oscillations, as it always acts to bring the oscillating object back to its equilibrium position. In SHM, this force is directly proportional to the displacement.
Defining equation for Simple Harmonic Motion: a = -\omega^2 x
Acceleration is proportional to displacement and in the opposite direction.
Students often misinterpret the negative sign in a = -\omega^2x as indicating a negative acceleration. Remember, it signifies that acceleration is in the opposite direction to displacement, always directed towards the equilibrium point.
The motion of an object undergoing simple harmonic motion can be described using sinusoidal equations. The displacement, velocity, and acceleration all vary sinusoidally with time, but with specific phase relationships. These equations allow for the prediction of the object's position, speed, and rate of change of speed at any given moment.
Displacement (sinusoidal): x = x_0 \sin \omega t
Used when the particle is at equilibrium position (x=0) at t=0.
Displacement (cosinusoidal): x = x_0 \cos \omega t
Used when the particle is at maximum displacement (x=x_0) at t=0.
Velocity (cosinusoidal): v = v_0 \cos \omega t
Used when displacement is sinusoidal (x = x_0 sin \omega t).
Maximum speed: v_0 = \omega x_0
Relates maximum speed to amplitude and angular frequency.
Velocity from displacement: v = \pm\omega\sqrt{x_0^2 - x^2}
Relates velocity to displacement, amplitude, and angular frequency.
Acceleration (sinusoidal): a = -\omega^2 x_0 \sin \omega t
Used when displacement is sinusoidal (x = x_0 sin \omega t).
Students often incorrectly use degrees instead of radians for angles in sinusoidal equations for displacement, velocity, and acceleration. Always ensure your calculator is in radian mode for these calculations.
Students often fail to recognise that velocity is zero at maximum displacement and maximum at zero displacement, while acceleration behaves the opposite way: acceleration is maximum at maximum displacement and zero at zero displacement.
For questions involving graphs, remember the phase differences: velocity leads displacement by \pi/2 radians (90°), and acceleration leads velocity by \pi/2 radians.
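The sinusoidal equations and their relationships can be checked numerically; the amplitude and frequency below are illustrative, and all angles are in radians:

```python
import math

# SHM kinematics for the x = x0 sin(wt) case.
# Illustrative values: amplitude x0 = 0.05 m, frequency f = 2.0 Hz.
x0 = 0.05
f = 2.0
w = 2 * math.pi * f                       # angular frequency in rad/s

def x(t):
    return x0 * math.sin(w * t)           # displacement

def v(t):
    return x0 * w * math.cos(w * t)       # velocity: leads x by pi/2

def a(t):
    return -w**2 * x(t)                   # a = -omega^2 x: leads v by pi/2

# Velocity is maximum at x = 0 (here, t = 0) and zero at maximum displacement.
print(x(0.0), v(0.0))                     # zero displacement, maximum speed x0*w

# Cross-check against v = +/- omega * sqrt(x0^2 - x^2) at an arbitrary time:
t = 0.1
print(abs(v(t)) - w * math.sqrt(x0**2 - x(t)**2))   # ~0: the two forms agree
```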
In simple harmonic motion, energy is continuously transformed between kinetic and potential forms. As the oscillating object moves towards the equilibrium position, its kinetic energy increases while potential energy decreases. Conversely, as it moves away from equilibrium, kinetic energy is converted back into potential energy. The total energy of the system, however, remains constant throughout the oscillation.
Kinetic energy in SHM: E_k = \frac{1}{2}m\omega^2(x_0^2 - x^2)
Kinetic energy at a given displacement x.
Restoring force in SHM: F = -m\omega^2 x
Force acting on the particle at displacement x.
Potential energy in SHM: E_p = \frac{1}{2}m\omega^2 x^2
Potential energy at a given displacement x.
Total energy in SHM: E = \frac{1}{2}m\omega^2 x_0^2
Total energy of the oscillating system, which is constant.
Total energy as sum of kinetic and potential: E = E_k + E_p
Expresses the law of conservation of energy for SHM.
In energy calculations, remember that the total energy E = \frac{1}{2}m\omega^2x_0^2 is constant and is equal to the maximum kinetic energy (at x=0) and the maximum potential energy (at x=x_0).
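A short numerical check of the energy interchange, using illustrative values for the mass, angular frequency, and amplitude:

```python
# Energy interchange in SHM: at any displacement x,
# Ek + Ep = E_total = 0.5 * m * w^2 * x0^2.
# Illustrative values: m = 0.2 kg, omega = 4.0 rad/s, amplitude x0 = 0.1 m.
m, w, x0 = 0.2, 4.0, 0.1
E_total = 0.5 * m * w**2 * x0**2

for x in (0.0, 0.05, 0.1):
    Ek = 0.5 * m * w**2 * (x0**2 - x**2)     # kinetic energy at displacement x
    Ep = 0.5 * m * w**2 * x**2               # potential energy at displacement x
    print(x, Ek, Ep)                         # Ek falls as Ep rises
    assert abs((Ek + Ep) - E_total) < 1e-12  # the total stays constant
```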
Isochronous — Oscillators that have a constant time period.
Isochronous oscillators maintain a consistent time period regardless of their amplitude, within certain limits. This property is often approximated in simple harmonic motion, though not perfectly true for all oscillating systems at large amplitudes.
Students often assume a pendulum is perfectly isochronous even at large amplitudes. In reality, it is only approximately so; the isochronism holds best for small angles of oscillation.
Free oscillations — Oscillations that occur when the only force acting on the oscillator is the restoring force.
Free oscillations occur when a system oscillates without any external driving forces or resistive forces. The object vibrates solely under the influence of its internal restoring force, at its natural frequency.
Natural frequency — The frequency at which a vibrating object undergoes free (undamped) oscillations.
Every oscillating system has a natural frequency, which is the specific frequency at which it will oscillate if left undisturbed and without any damping forces acting upon it.
Damped — Oscillations where frictional and other resistive forces cause the oscillator’s energy to be dissipated.
Damped oscillations occur when resistive forces, such as friction or air resistance, act on an oscillating system. These forces cause the system's mechanical energy to be gradually converted into other forms, leading to a decrease in the amplitude of oscillation over time.
Light damping — Damping where the amplitude of the oscillations decreases gradually with time.
In light damping, the resistive forces are relatively small, allowing the system to complete many oscillations before its amplitude significantly diminishes. The period of oscillation remains approximately constant.
Critical damping — The point where the displacement decreases to zero in the shortest time, without any oscillation.
Critical damping represents the optimal level of damping where the system returns to its equilibrium position as quickly as possible without oscillating. This is often desired in systems like car shock absorbers.
Overdamping or heavy damping — Any further increase in damping beyond critical damping, where the displacement decreases to zero in a longer time than for critical damping.
Heavy damping occurs when the resistive forces are very large, causing the system to return to equilibrium slowly without oscillating. The time taken to reach equilibrium is longer than with critical damping.
When sketching graphs of damped oscillations, ensure for light damping that the period is constant as the amplitude decays exponentially.
Forced vibrations — When a vibrating object is made to vibrate at the frequency of an applied periodic force, rather than at its natural frequency.
Forced vibrations occur when an external periodic force is applied to an oscillating system, compelling it to oscillate at the frequency of the applied force, known as the driving frequency.
Resonance — Occurs when the natural frequency of vibration of an object is equal to the driving frequency, giving a maximum amplitude of vibration.
Resonance is a phenomenon where a system's amplitude of oscillation becomes maximal when the frequency of an applied driving force matches the system's natural frequency. This can lead to very large and potentially destructive vibrations.
Resonant frequency — The frequency at which the driving frequency equals the natural frequency of oscillation, and the amplitude of the oscillations reaches a maximum.
The resonant frequency is the specific driving frequency at which resonance occurs, leading to the largest possible amplitude of oscillation for a given system.
Resonance curve — A graph illustrating the variation with driving frequency of the amplitude of vibration of a mass.
A resonance curve visually represents how the amplitude of a forced oscillation changes as the driving frequency is varied. It typically shows a peak at the resonant frequency.
To score full marks explaining resonance, state that the driving frequency must equal the natural frequency of the oscillating system, which causes the amplitude to reach its maximum.
Students often confuse frequency with angular frequency, or fail to convert between them correctly (e.g., forgetting the 2\pi factor). Always remember the relationship \omega = 2\pi f.
Students often fail to convert units (e.g., g to kg, mm to m) before performing calculations, leading to incorrect numerical answers. Always convert all given values to standard SI units before substituting them into equations.
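A minimal sketch of the conversions worth doing before any substitution; all of the values here are illustrative:

```python
import math

# Convert to base SI units BEFORE substituting into SHM formulas.
f = 1.5                     # frequency in Hz
T = 1 / f                   # period in s
w = 2 * math.pi * f         # angular frequency: don't forget the 2*pi factor
m = 250e-3                  # 250 g  -> 0.25 kg
x0 = 12e-3                  # 12 mm -> 0.012 m

print(w - 2 * math.pi / T)  # ~0: both routes to omega agree
print(m, x0)                # values now safe to substitute
```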
Definitions Bank
Oscillation
One complete movement from the starting or rest position; for example, moving up, then down, and finally back to the rest position.
Period T
The time taken for one complete oscillation or vibration.
Frequency f
The number of oscillations or vibrations per unit time.
Displacement
The distance from the equilibrium position.
Amplitude
The maximum displacement.
Command Word Guide
| Command word | Guidance |
| --- | --- |
| Define | For 'Simple Harmonic Motion', state both proportionality of acceleration to displacement AND that it's in the opposite direction (or towards a fixed point). |
| Describe | When describing energy changes in SHM, explain the continuous interchange between kinetic and potential energy, and state that total energy remains constant. |
| Explain | For 'Resonance', explain that the driving frequency equals the natural frequency, leading to a maximum amplitude of oscillation. |
| Sketch | When sketching damped oscillations, ensure the amplitude decreases over time, but for light damping, the period remains constant. |
Common Mistakes
Confusing frequency (f) with angular frequency (\omega) or forgetting the 2\pi factor in conversions.
Always remember the relationship \omega = 2\pi f and convert carefully.
Using degrees instead of radians for angles (\omega t) in sinusoidal equations.
Ensure your calculator is set to radian mode when working with x = x_0 sin \omega t, v = v_0 cos \omega t, etc.
Misinterpreting the negative sign in a = -\omega^2x as always meaning negative acceleration.
The negative sign indicates that acceleration is always directed opposite to the displacement, i.e., towards the equilibrium point.
This chapter explores electric fields, regions where charges experience forces, defining electric field strength and potential. It covers the behavior of charges in uniform fields, Coulomb's law for point charges, and the relationship between electric field strength and potential.
Electric field — A region of space where a stationary electric charge experiences a force.
Electric fields are fundamental to understanding how charges interact. Any charged particle placed within such a region will experience an electric force.
Electric field strength — The force per unit charge acting on a small stationary positive charge placed at that point.
Electric field strength quantifies the intensity of an electric field. It is a vector quantity, with its direction defined by the force a positive test charge would experience.
Force on a charge in an electric field: F = qE
This formula applies to a charge q in an electric field E, allowing calculation of the force experienced by the charge.
Electric fields can be visually represented using field lines. These lines indicate the direction of the force on a positive test charge, originating from positive charges and terminating on negative charges. The density of the field lines signifies the strength of the electric field.
Coulomb’s law — The force between two point charges is proportional to the product of the charges and inversely proportional to the square of the distance between them.
Coulomb's law describes the fundamental electrostatic interaction between two point charges. It quantifies the attractive or repulsive force based on their magnitudes and separation.
Coulomb's Law: F = \frac{Q_1 Q_2}{4\pi\epsilon_0 r^2}
This formula calculates the force between two point charges, Q1 and Q2, separated by a distance r in a vacuum or air. ε₀ is the permittivity of free space.
Permittivity of free space — The quantity \epsilon_0, which is a constant in Coulomb's law for charges in a vacuum.
The permittivity of free space, ε₀, is a physical constant that represents the absolute dielectric permittivity of a vacuum. It is crucial for calculating electric forces and fields in free space.
Electric field strength of a point charge: E = \frac{Q}{4\pi\epsilon_0 r^2}
This formula determines the electric field strength at a distance r from an isolated point charge Q in a vacuum.
Students often confuse the inverse square relationship for electric field strength and force with the inverse proportionality for electric potential. Remember that electric field strength (E) and force (F) are proportional to 1/r², while electric potential (V) is proportional to 1/r.
A uniform electric field exists between two parallel metal plates with a potential difference across them. In such a field, the electric field strength is constant in magnitude and direction. Charged particles moving within these fields experience a constant force, leading to predictable motion.
Electric field strength in a uniform field: E = \frac{\Delta V}{\Delta d}
This formula is used to calculate the electric field strength E in a uniform field, where ΔV is the potential difference across a distance Δd.
When a charged particle enters a uniform electric field, it experiences a constant electric force. This force causes the particle to accelerate. If the particle enters the field perpendicular to the field lines, its path will be parabolic, similar to projectile motion under gravity.
Acceleration of a charged particle in an electric field: a = \frac{qE}{m}
This formula calculates the acceleration of a charged particle of mass m and charge q in an electric field E.
Kinetic energy gained by a charged particle in a uniform field: Vq = \frac{1}{2}mv^2
This formula relates the work done by the electric field (Vq) to the gain in kinetic energy of a particle (½mv²) moving from rest through a potential difference V.
For a charged particle moving in a uniform field, first calculate the constant electric force (F=qE) and acceleration (a=F/m), then apply the standard kinematic (suvat) equations. If a particle enters a uniform field at a right angle, its path is parabolic. Treat its horizontal motion (constant velocity) and vertical motion (constant acceleration) independently.
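The suvat approach for a particle entering a uniform field at right angles can be sketched as follows; the electron charge and mass are standard values, while the field strength, entry speed, and plate length are illustrative:

```python
# Electron entering a uniform field at right angles: treat like projectile motion.
q = 1.60e-19        # electron charge magnitude, C
m = 9.11e-31        # electron mass, kg
E = 2.0e3           # uniform field strength, V/m (= N/C), illustrative
u = 1.0e7           # horizontal entry speed, m/s, illustrative
L = 0.04            # length of the field region, m, illustrative

a = q * E / m       # constant acceleration from F = qE (gravity negligible)
t = L / u           # time in the field: horizontal velocity is unchanged
y = 0.5 * a * t**2  # deflection across the field, from s = 1/2 a t^2

print(a, t, y)      # parabolic path: y grows with the square of the time
```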
Electric potential — The work done per unit positive charge in bringing a small test charge from infinity to the point.
Electric potential is a scalar quantity that describes the potential energy per unit charge at a given point in an electric field. It is defined relative to infinity, where the potential is considered zero.
Electric potential of a point charge: V = \frac{Q}{4\pi\epsilon_0 r}
This formula calculates the electric potential at a distance r from an isolated point charge Q in a vacuum.
Potential gradient — The rate of change of electric potential with distance.
The potential gradient describes how rapidly the electric potential changes over a given distance. It is directly related to the electric field strength.
Electric field strength is the negative potential gradient. This means that the electric field points in the direction of decreasing electric potential. The relationship E = -ΔV/Δd highlights how a change in potential over a distance gives rise to an electric field.
Electric potential energy of two point charges: E_p = \frac{Qq}{4\pi\epsilon_0 r}
This formula represents the work done to bring two isolated point charges, Q and q, to a separation r from infinity.
Students often confuse the symbol 'E' for electric field strength with 'E' for energy. Remember that 'E' for electric field strength has units of N C⁻¹ or V m⁻¹, while energy (like kinetic energy or potential energy Eₚ) is measured in Joules (J).
For 'define' questions, use the precise wording from the syllabus: e.g., 'force per unit positive charge' for electric field strength and 'work done per unit positive charge from infinity' for electric potential.
When drawing electric field lines, ensure they have direction arrows (pointing from positive to negative), they never cross, and their density represents the field strength.
Use the principle of energy conservation when a particle is accelerated through a potential difference V: the work done on the charge (W=Vq) equals its gain in kinetic energy (ΔKE = ½mv²).
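A worked instance of this energy-conservation route, for an electron and an illustrative potential difference of 500 V:

```python
import math

# Electron accelerated from rest through a p.d. V: Vq = 0.5 * m * v^2.
q = 1.60e-19                    # electron charge magnitude, C
m = 9.11e-31                    # electron mass, kg
V = 500.0                       # accelerating p.d. in volts, illustrative

v = math.sqrt(2 * q * V / m)    # final speed, m/s
print(v)                        # of order 1e7 m/s
```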
Definitions Bank
Electric field
A region of space where a stationary electric charge experiences a force.
Electric field strength
The force per unit charge acting on a small stationary positive charge placed at that point.
Coulomb’s law
The force between two point charges is proportional to the product of the charges and inversely proportional to the square of the distance between them.
Permittivity of free space
The quantity \epsilon_0, which is a constant in Coulomb's law for charges in a vacuum.
Electric potential
The work done per unit positive charge in bringing a small test charge from infinity to the point.
Common Mistakes
Confusing the symbol 'E' for electric field strength with 'E' for energy.
Remember that 'E' for electric field strength has units of N C⁻¹ or V m⁻¹, while energy (like kinetic energy or potential energy Eₚ) is measured in Joules (J).
Assuming gravitational effects are significant for charged particles in electric fields, when they are often negligible.
For subatomic particles like electrons, the gravitational force is almost always negligible compared to the electric force and can be ignored unless specified.
Applying Coulomb's law to extended charged objects without considering them as point charges at their center (for spherical conductors outside the sphere).
Apply Coulomb's law and the formulas for point charges only to actual point charges or for points outside a uniformly charged sphere.
This chapter introduces capacitance, defining it for various conductors and explaining how to calculate combined capacitance for series and parallel arrangements. It details how capacitors store electric potential energy and covers their exponential discharge through a resistor, introducing the time constant.
Capacitance — The ratio of charge Q to potential V for a conductor.
Capacitance quantifies a conductor's ability to store electric charge at a given potential. A higher capacitance means more charge can be stored for the same potential difference.
Farad — The unit of capacitance (symbol F). One farad is one coulomb per volt.
The Farad is the SI unit for capacitance. A capacitor with a capacitance of one Farad will store one Coulomb of charge when a potential difference of one Volt is applied across it.
Capacitors — Circuit components which store charge and, therefore, have capacitance.
Capacitors are fundamental electronic components designed to store electrical energy in an electric field. They achieve this by accumulating electric charge on their plates.
Capacitance (general): C = \frac{Q}{V}
This fundamental formula defines capacitance (C) as the ratio of charge (Q) stored to the potential difference (V) across the conductor. It applies to both isolated conductors and parallel plate capacitors.
Capacitance can be applied to various conductors. For an isolated spherical conductor of radius 'r', its capacitance is directly proportional to its radius and the permittivity of free space. This shows how the physical dimensions of a conductor influence its ability to store charge.
Capacitance of an isolated spherical conductor: C = 4\pi\epsilon_0 r
This formula calculates the capacitance (C) for an isolated conducting sphere of radius (r), where \epsilon_0 is the permittivity of free space.
Parallel-plate capacitor — The simplest capacitor in an electric circuit consisting of two metal plates, with an air gap between them which acts as an insulator.
A parallel-plate capacitor is a common type of capacitor, comprising two conductive plates separated by an insulating material. The air gap between the plates prevents charge from flowing directly between them, allowing for charge accumulation.
Dielectric — The insulating material placed between the plates of a capacitor.
A dielectric is an electrical insulator that can be polarised by an applied electric field. Placing a dielectric between the plates of a capacitor increases its capacitance compared to a vacuum or air gap.
Relative permittivity — The capacitance of a parallel-plate capacitor with the dielectric between the plates divided by the capacitance of the same capacitor with a vacuum between the plates.
Relative permittivity, \epsilon_r, is a dimensionless quantity that indicates how much an insulating material increases the capacitance of a capacitor compared to a vacuum. It reflects the dielectric's ability to store electrical energy.
Capacitance of a parallel-plate capacitor (air/vacuum): C = \frac{\epsilon_0 A}{d}
This formula calculates the capacitance (C) of a parallel-plate capacitor with air or a vacuum between the plates, where A is the plate area, d is the distance between plates, and \epsilon_0 is the permittivity of free space.
Capacitance of a parallel-plate capacitor (with dielectric): C = \frac{\epsilon_0 \epsilon_r A}{d}
This formula extends the parallel-plate capacitor calculation to include a dielectric material, incorporating its relative permittivity (\epsilon_r).
Students often confuse the symbol 'C' for capacitance with 'C' for coulomb (unit of charge). Remember that 'C' as a variable denotes capacitance, while 'C' as a unit denotes coulombs.
Capacitors can be connected in series or parallel arrangements, which affects their combined capacitance. The rules for combining capacitors are distinct from those for resistors, leading to different overall capacitance values depending on the configuration.
Combined capacitance in series: \frac{1}{C} = \frac{1}{C_1} + \frac{1}{C_2} + ...
For capacitors connected in series, the reciprocal of the total capacitance (C) is the sum of the reciprocals of the individual capacitances (C_1, C_2, etc.).
Combined capacitance in parallel: C = C_1 + C_2 + ...
For capacitors connected in parallel, the total capacitance (C) is simply the sum of the individual capacitances (C_1, C_2, etc.).
Students often confuse the series/parallel rules for capacitors with those for resistors. Remember that for capacitors, series connections use the reciprocal sum, while parallel connections use a simple sum, which is the opposite of resistors.
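The two combination rules can be captured in a pair of helper functions; the capacitor values below are illustrative:

```python
# Combined capacitance -- note the rules are the opposite way round to resistors.
def series(*caps):
    # 1/C = 1/C1 + 1/C2 + ...
    return 1 / sum(1 / c for c in caps)

def parallel(*caps):
    # C = C1 + C2 + ...
    return sum(caps)

C1, C2 = 100e-6, 300e-6        # illustrative: 100 uF and 300 uF
print(series(C1, C2))          # ~75 uF: smaller than either capacitor alone
print(parallel(C1, C2))        # ~400 uF: the simple sum
```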
Capacitors store electric potential energy in the electric field between their plates. This energy can be determined from the area under a potential-charge graph, which represents the work done to charge the capacitor. The energy stored is proportional to the square of the potential difference or the square of the charge.
Energy stored in a capacitor (charge and potential): E_p = \frac{1}{2}QV
This formula calculates the electric potential energy (E_p) stored in a capacitor, derived from the area under the potential-charge graph, where Q is the charge and V is the potential difference.
Energy stored in a capacitor (capacitance and potential): E_p = \frac{1}{2}CV^2
An alternative form for calculating stored energy (E_p), using capacitance (C) and potential difference (V), derived from E_p = 1/2 QV and C = Q/V.
Energy stored in a capacitor (charge and capacitance): E_p = \frac{Q^2}{2C}
Another alternative form for calculating stored energy (E_p), using charge (Q) and capacitance (C), also derived from E_p = 1/2 QV and C = Q/V.
Students often believe a capacitor stores net charge. Instead, a capacitor separates and stores positive and negative charges on its plates, meaning the net charge on the component as a whole is zero.
When calculating energy change during discharge, students might incorrectly apply the formula (V1^2 - V2^2) as (V1 - V2)^2. Remember that the energy change is the difference between the initial and final stored energies, calculated as 1/2 C(V_initial^2 - V_final^2).
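The difference between the correct and incorrect expressions is easy to demonstrate numerically, using illustrative values:

```python
# Energy released as a capacitor's p.d. falls from V1 to V2 is
#   0.5 * C * (V1**2 - V2**2),   NOT   0.5 * C * (V1 - V2)**2.
C = 470e-6                     # illustrative: 470 uF
V1, V2 = 12.0, 4.0             # illustrative initial and final p.d.s, in V

correct = 0.5 * C * (V1**2 - V2**2)
wrong = 0.5 * C * (V1 - V2)**2
print(correct, wrong)          # the two differ, so the shortcut loses marks
```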
When a capacitor discharges through a resistor, the charge, potential difference, and current all decrease exponentially over time. This process is governed by the resistance and capacitance of the circuit, leading to characteristic decay curves.
Exponential decay (general): x = x_0 e^{-kt}
This is the general form for exponential decay curves, where x is the decaying quantity, x_0 is its initial value, k is the decay constant, and t is time.
Charge decay during discharge: Q = Q_0 e^{-t/CR}
This equation describes the exponential decay of charge (Q) on a capacitor during discharge, where Q_0 is the initial charge, C is capacitance, R is resistance, and t is time.
Current decay during discharge: I = I_0 e^{-t/CR}
This equation describes the exponential decay of current (I) during capacitor discharge, where I_0 is the initial current, C is capacitance, R is resistance, and t is time.
Potential difference decay during discharge: V = V_0 e^{-t/CR}
This equation describes the exponential decay of potential difference (V) across a capacitor during discharge, where V_0 is the initial potential difference, C is capacitance, R is resistance, and t is time.
Time constant — The time taken for the charge to decrease to 1/e (about 37%) of its initial value.
The time constant (\tau) is a crucial characteristic of an RC circuit, representing the time it takes for the charge, potential difference, or current to fall to approximately 37% (1/e) of its initial value during discharge. It indicates the rate of decay.
Time constant: \tau = CR
This formula defines the time constant (\tau) for a capacitor-resistor circuit, indicating the rate of decay for charge, potential difference, and current during discharge.
When sketching discharge graphs (Q, V, or I vs. t), ensure they are exponential decay curves that start at the initial value (Q₀, V₀, I₀) and are asymptotic to the time axis.
To find the time constant (\tau) from a discharge graph, find the time it takes for the quantity to drop to 37% of its initial value, or draw a tangent at t=0 and find where it intercepts the time axis.
When using the decay equation x = x₀e⁻ᵗ/ᴿᶜ, ensure all units are in their base SI form (Farads, Ohms, Seconds) before substituting them into the formula.
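A short sketch of the discharge calculation in base SI units, using illustrative component values:

```python
import math

# Capacitor discharge through a resistor, everything in base SI units.
C = 220e-6                           # illustrative: 220 uF in farads
R = 10e3                             # illustrative: 10 kOhm in ohms
Q0 = 1.0e-3                          # illustrative initial charge, C

tau = R * C                          # time constant, in seconds

def Q(t):
    return Q0 * math.exp(-t / (C * R))   # Q = Q0 * e^(-t/CR)

print(tau)                           # ~2.2 s
print(Q(tau) / Q0)                   # ~0.37: one time constant leaves 1/e of Q0
```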
For questions involving a potential-charge (V-Q) graph, remember that the energy stored is the area under the graph, and the capacitance is the reciprocal of the gradient.
Be prepared to derive the formulae for capacitors in series and parallel using the definition C = Q/V and principles of charge conservation and potential division.
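One way these derivations run, sketched for two capacitors (the argument extends to any number), starting from C = Q/V:

```latex
% Series: each capacitor carries the same charge Q; the p.d.s add.
V = V_1 + V_2 = \frac{Q}{C_1} + \frac{Q}{C_2}
\;\Rightarrow\;
\frac{1}{C} = \frac{V}{Q} = \frac{1}{C_1} + \frac{1}{C_2}

% Parallel: each capacitor has the same p.d. V; the charges add.
Q = Q_1 + Q_2 = C_1 V + C_2 V
\;\Rightarrow\;
C = \frac{Q}{V} = C_1 + C_2
```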
Definitions Bank
Capacitance
The ratio of charge Q to potential V for a conductor.
Farad
The unit of capacitance (symbol F). One farad is one coulomb per volt.
Capacitors
Circuit components which store charge and, therefore, have capacitance.
Parallel-plate capacitor
The simplest capacitor in an electric circuit consisting of two metal plates, with an air gap between them which acts as an insulator.
Dielectric
The insulating material placed between the plates of a capacitor.
Common Mistakes
Confusing the symbol 'C' for capacitance with 'C' for coulomb (unit of charge).
Remember that 'C' as a variable represents capacitance, while 'C' as a unit represents coulombs.
Believing a capacitor stores net charge.
A capacitor separates and stores positive and negative charges on its plates; the net charge on the component as a whole is zero.
Incorrectly applying the formula for energy change during discharge, specifically (V1^2 - V2^2) is not equal to (V1 - V2)^2.
The energy change is the difference between the initial and final stored energies, calculated as 1/2 C(V_initial^2 - V_final^2).
This chapter explores magnetic fields, regions where magnets or moving charges experience a force. It covers the forces on current-carrying conductors and moving charges, defines magnetic flux density, and details magnetic fields produced by currents. The chapter concludes with the Hall effect and electromagnetic induction, including Faraday's and Lenz's laws.
north-seeking pole — The pole of a freely suspended magnet that points to the north.
When a magnet is freely suspended, one end consistently points towards the Earth's geographic north. This end is designated as the north-seeking pole, indicating its alignment with the Earth's magnetic field.
south-seeking pole — The pole of a freely suspended magnet that points to the south.
Conversely, the pole of a freely suspended magnet that points towards the Earth's geographic south is known as the south-seeking pole. It aligns itself opposite to the north-seeking pole in response to the Earth's magnetic field.
magnetic field — A region of space where a magnetic pole experiences a force.
A magnetic field is an invisible area surrounding a magnet or a moving electric charge where magnetic forces are exerted. Within this region, another magnetic pole will experience a push or pull, indicating the presence and direction of the field.
neutral point — A point where there is no resultant magnetic field because two fields are equal in magnitude but opposite in direction.
A neutral point occurs when two or more magnetic fields cancel each other out precisely. At such a point, the magnetic forces from different sources are equal in strength but act in opposing directions, resulting in a net magnetic field of zero.
Magnetic fields are visually represented using field lines, which emerge from north poles and enter south poles. These lines indicate the direction of the force a north-seeking pole would experience and their density signifies the strength of the field. Magnetic field lines are continuous and never cross each other.
Students often assume magnetic field lines can cross or touch, which is incorrect. Remember that magnetic field lines are continuous and never intersect.
magnetic flux density — Numerically equal to the force per unit current per unit length on a straight wire placed at right angles to a uniform magnetic field.
Magnetic flux density, denoted by B, quantifies the strength of a magnetic field. It is defined by the force experienced by a current-carrying wire perpendicular to the field. A higher magnetic flux density means a stronger magnetic field and thus a greater force on the wire.
tesla — The uniform magnetic flux density which, acting normally to a long straight wire carrying a current of 1 ampere, causes a force per unit length of 1 N m⁻¹ on the conductor.
The tesla (T) is the SI unit for magnetic flux density. One tesla represents a very strong magnetic field, where a one-metre length of wire carrying one ampere of current experiences a force of one newton when placed perpendicular to the field.
Force on a current-carrying conductor: F = BIL \sin \theta
Used to calculate the force on a straight wire in a uniform magnetic field. The angle \theta is between the conductor and the magnetic field direction.
When defining magnetic flux density, be precise: state it is the force per unit current per unit length on a wire placed at right angles to the field.
Students often incorrectly apply the angle \theta in F = BIL sin \theta, especially when the conductor is parallel to the field. Remember that if the conductor is parallel to the field, sin \theta = 0, and the force is zero.
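The angle dependence of F = BIL sin θ can be checked with a short sketch. The values below are illustrative, not from the text; the two printed cases correspond to the perpendicular and parallel situations described above.

```python
import math

# Force on a straight current-carrying wire in a uniform magnetic field.
# Illustrative values only.
B = 0.20  # magnetic flux density, T
I = 3.0   # current, A
L = 0.50  # length of wire in the field, m

def force(theta_deg):
    """F = BIL sin(theta), with theta between conductor and field."""
    return B * I * L * math.sin(math.radians(theta_deg))

print(force(90))  # perpendicular: maximum force, ≈ 0.30 N
print(force(0))   # parallel: sin(theta) = 0, so zero force
```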
Just as a current-carrying wire experiences a force in a magnetic field, individual moving charges also experience a force. This force is perpendicular to both the velocity of the charge and the magnetic field direction. The direction of this force can be determined using Fleming's left-hand rule, where the second finger represents conventional current (opposite to electron flow).
Force on a moving charge: F = BQv \sin \theta
Used to calculate the force on an individual charged particle moving in a uniform magnetic field. The angle \theta is between the velocity of the particle and the magnetic field direction.
Students often confuse Fleming's left-hand rule (motor effect) with Fleming's right-hand rule (generator effect). Remember, the left-hand rule is for determining the force on a current or moving charge in a magnetic field.
Forgetting that the second finger in Fleming's rules refers to conventional current, which is opposite to electron flow, is a common error. Always consider the direction of conventional current.
Radius of circular path for charged particle: r = mv/(BQ)
Applies when a charged particle moves perpendicular to a uniform magnetic field, resulting in circular motion where the magnetic force provides the centripetal force.
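Setting the magnetic force equal to the centripetal force (Bqv = mv²/r) gives r = mv/(Bq). A minimal sketch for an electron, with an illustrative speed and field:

```python
# Radius of the circular path of an electron moving perpendicular to a
# uniform magnetic field. Speed and field values are illustrative.
m_e = 9.11e-31  # electron mass, kg
q_e = 1.60e-19  # elementary charge, C
v = 2.0e7       # electron speed, m/s
B = 1.0e-3      # magnetic flux density, T

r = m_e * v / (B * q_e)  # r = mv/(Bq)
print(r)  # ≈ 0.11 m
```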
specific charge — The ratio of the charge q on a particle to its mass m.
Specific charge is a fundamental property of charged particles, representing the amount of charge per unit of mass. It is a crucial quantity in determining how particles behave in electric and magnetic fields, influencing their acceleration and deflection.
Specific charge of electron (fine-beam tube): e/m_e = 2V/(B^2r^2)
Derived for electrons accelerated through a potential difference and then moving perpendicular to a magnetic field, typically in a fine-beam tube experiment.
In calculations involving circular motion (e.g., a fine-beam tube), explicitly state that the magnetic force provides the centripetal force (Bqv = mv²/r) to earn method marks.
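Combining the energy equation eV = ½mv² with Bev = mv²/r eliminates v and gives e/m = 2V/(B²r²). This is a hedged sketch with illustrative readings chosen to land near the accepted value:

```python
# Specific charge of the electron from fine-beam tube readings.
# All readings below are illustrative, not experimental data from the text.
V = 250.0   # accelerating p.d., volts
B = 1.0e-3  # magnetic flux density, T
r = 0.053   # measured radius of the circular beam, m

specific_charge = 2 * V / (B**2 * r**2)  # e/m = 2V/(B^2 r^2)
print(specific_charge)  # ≈ 1.8e11 C/kg, close to the accepted e/m
```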
velocity selector — An arrangement of perpendicular electric and magnetic fields that allows only charged particles with a specific velocity to pass undeviated.
A velocity selector is a device that uses balanced electric and magnetic forces to isolate charged particles moving at a particular speed. Only particles for which the electric force precisely cancels the magnetic force will travel in a straight line, while others are deflected.
Velocity selection condition: v = E/B
Applies when electric and magnetic forces balance, allowing particles of a specific velocity to pass undeviated through perpendicular electric and magnetic fields.
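Balancing the electric force qE against the magnetic force qvB gives the selected speed v = E/B. A one-line check with illustrative field strengths:

```python
# Velocity selector: only particles with v = E/B pass undeviated.
# Field values are illustrative.
E_field = 5.0e4  # electric field strength, V/m
B = 0.10         # magnetic flux density, T

v_selected = E_field / B  # qE = qvB  =>  v = E/B
print(v_selected)  # ≈ 5.0e5 m/s
```

Note that the charge q cancels, so the selected speed is the same for any charged particle.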
Electric currents are sources of magnetic fields. The pattern of these fields depends on the geometry of the current-carrying conductor. For a long straight wire, the field lines are concentric circles around the wire. For a flat circular coil, the field resembles that of a bar magnet, with lines passing through the coil's center. A long solenoid produces a uniform magnetic field inside, similar to a bar magnet, and weaker fields outside.
When sketching field patterns for wires or solenoids, use concentric circles or parallel lines with clear, correct arrows indicating the direction. Remember the three-dimensional nature of magnetic fields.
solenoid — A long coil.
A solenoid is essentially a long, cylindrical coil of wire. When an electric current passes through the wire, it generates a magnetic field, which is particularly uniform and strong inside the coil, making solenoids useful for creating controlled magnetic fields.
electromagnets — Magnets whose magnetic field can be switched off by switching off the current in the coil, typically made by winding a coil on a ferrous core.
Electromagnets are temporary magnets created by passing current through a coil, often wound around a soft iron core. Unlike permanent magnets, their magnetic field can be controlled by varying or switching off the current, making them versatile for applications requiring adjustable magnetic forces.
The Hall effect describes the generation of a voltage across a conductor when it carries a current and is placed in a magnetic field perpendicular to the current. This Hall voltage arises because the magnetic force deflects charge carriers to one side of the conductor, creating a charge separation and thus a potential difference. The magnitude of the Hall voltage depends on the magnetic flux density, current, and properties of the conductor.
Hall voltage — A potential difference that develops across a conductor when charge carriers are forced to one side by a magnetic field.
When charge carriers (like electrons) move through a conductor in a magnetic field, the magnetic force pushes them to one side. This accumulation of charge creates an electric field and a measurable potential difference across the conductor, known as the Hall voltage.
Hall probe — An apparatus consisting of a thin slice of semiconductor material used to measure magnetic flux density, based on the Hall effect.
A Hall probe is a practical device that utilizes the Hall effect to measure magnetic flux density. It typically uses a semiconductor material because semiconductors have a lower charge carrier density, leading to a larger and more easily measurable Hall voltage for a given magnetic field.
Hall voltage: V_H = BI/(ntq)
Used to calculate the potential difference across a conductor due to the Hall effect, where 'n' is the number density of charge carriers and 't' is the thickness of the conductor.
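The formula V_H = BI/(ntq) also explains why Hall probes use semiconductors: the small carrier density n appears in the denominator. A sketch with illustrative values:

```python
# Hall voltage across a thin semiconductor slice. Illustrative values.
B = 0.50      # magnetic flux density, T
I = 1.0e-3    # current, A
n = 1.0e21    # charge-carrier number density, m^-3 (semiconductor)
t = 1.0e-3    # slice thickness, m
q = 1.60e-19  # carrier charge, C

V_H = B * I / (n * t * q)  # V_H = BI/(ntq)
print(V_H)  # ≈ 3.1e-3 V, easily measurable thanks to the low n
```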
magnetic flux — The product of the magnetic flux density and the area normal to the lines of flux.
Magnetic flux (\Phi) is a measure of the total number of magnetic field lines passing through a given area. It quantifies the 'amount' of magnetic field penetrating a surface, and its value depends on both the strength of the magnetic field and the orientation of the area relative to the field.
weber — The unit of magnetic flux, equal to one tesla metre-squared (T m²).
The weber (Wb) is the SI unit for magnetic flux. One weber represents the magnetic flux through an area of one square metre when the magnetic flux density is one tesla and the field lines are perpendicular to the area.
Magnetic flux: \Phi = BA \sin \theta
Calculates magnetic flux through an area; \theta is the angle between the magnetic field and the plane of the area. For the area normal to the field, \theta = 90\degree, and \sin \theta = 1.
magnetic flux linkage — The product of the magnetic flux through a coil and the number of turns on the coil (N\Phi).
Magnetic flux linkage considers the total magnetic flux passing through all turns of a coil. If a coil has N turns, and each turn experiences a magnetic flux \Phi, then the total flux linkage is N\Phi. This quantity is crucial for understanding electromagnetic induction in coils.
Magnetic flux linkage: N\Phi = BAN
Calculates the total magnetic flux passing through all turns of a coil, assuming the magnetic field is uniform and perpendicular to the area of each turn.
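For a uniform field perpendicular to the plane of each turn, the flux linkage is simply NΦ = BAN. A short sketch with illustrative coil values:

```python
# Flux linkage of a coil in a uniform perpendicular field.
# Coil values are illustrative.
N = 200      # number of turns
B = 0.050    # magnetic flux density, T
A = 4.0e-4   # coil area, m^2 (e.g. 2 cm x 2 cm)

flux_linkage = N * B * A  # N * Phi = BAN
print(flux_linkage)  # ≈ 4.0e-3 Wb (weber-turns)
```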
electromagnetic induction — The effect where an e.m.f. is induced by a changing magnetic field.
Electromagnetic induction is the phenomenon where a changing magnetic field through a conductor or coil generates an electromotive force (e.m.f.). This induced e.m.f. can drive an induced current if the circuit is complete, forming the basis of generators and transformers.
Students often do not understand that an e.m.f. is induced only when there is a *change* in magnetic flux linkage, not just a constant magnetic field. A static magnetic field does not induce an e.m.f.
Faraday's law quantifies electromagnetic induction, stating that the magnitude of the induced e.m.f. is directly proportional to the rate of change of magnetic flux linkage. This means a faster change in flux linkage will result in a larger induced e.m.f. This law is fundamental to the operation of many electrical devices.
Faraday's Law and Lenz's Law: E = -d(N\Phi)/dt
Relates induced e.m.f. to the rate of change of magnetic flux linkage; the negative sign indicates the direction of the induced e.m.f. (Lenz's Law) which opposes the change in flux.
Faraday's Law (average e.m.f.): E = \Delta(N\Phi)/\Delta t
Used for calculating average induced e.m.f. when flux linkage changes linearly or over a small time interval.
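When the flux linkage changes linearly, the average e.m.f. is just the change in NΦ divided by the time taken. A sketch with illustrative values, reusing the coil from the flux-linkage definition style:

```python
# Average induced e.m.f. for a field rising linearly through a coil.
# All values are illustrative.
N = 200              # number of turns
A = 4.0e-4           # coil area, m^2
B1, B2 = 0.0, 0.050  # field rises from zero to 50 mT
dt = 0.10            # time taken for the change, s

emf = N * (B2 - B1) * A / dt  # e.m.f. = change in (N * Phi) / time
print(emf)  # ≈ 0.040 V
```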
Lenz's law provides the direction of the induced e.m.f. and current. It states that the direction of an induced current is always such that it opposes the change in magnetic flux that caused it. This opposition is a manifestation of the conservation of energy, as the induced current creates its own magnetic field to counteract the original change.
For questions on Lenz's law, structure your answer in steps: 1. State the change in flux. 2. State the direction of the induced field needed to oppose the change. 3. Use a hand rule to determine the direction of the induced current.
eddy currents — Currents induced in a metal disc when it spins in a magnetic field, varying in magnitude and direction.
Eddy currents are circulating currents induced within a bulk conductor, such as a metal disc, when it moves through a magnetic field or when the magnetic field through it changes. These currents form closed loops within the conductor and are a consequence of electromagnetic induction.
eddy current damping — The dissipation of the energy of rotation of a disc due to heating caused by induced eddy currents.
Eddy current damping is the process where the kinetic energy of a rotating metal disc in a magnetic field is converted into heat due to the induced eddy currents. According to Lenz's law, these eddy currents create magnetic fields that oppose the motion, thus slowing down the disc.
When using F = BIL sin \theta or F = BQv sin \theta, always check if the angle is 90\degree. If so, state that sin \theta = 1 to show you have considered it, rather than just omitting it.
Definitions Bank
north-seeking pole
The pole of a freely suspended magnet that points to the north.
south-seeking pole
The pole of a freely suspended magnet that points to the south.
magnetic field
A region of space where a magnetic pole experiences a force.
neutral point
A point where there is no resultant magnetic field because two fields are equal in magnitude but opposite in direction.
magnetic flux density
Numerically equal to the force per unit current per unit length on a straight wire placed at right angles to a uniform magnetic field.
Command Word Guide
| Command word | Guidance |
| --- | --- |
| Define | Provide the precise, mark-scheme definition for terms like 'magnetic flux density' or 'tesla', ensuring all key components (e.g., 'per unit current per unit length', 'at right angles') are included. |
| Explain | For phenomena like the Hall effect or electromagnetic induction, describe the underlying physics, including the forces on charge carriers, charge separation, or the role of changing flux linkage. For Lenz's law, explain how the induced effect opposes the change causing it. |
| Sketch | Draw magnetic field patterns accurately for various current configurations (straight wire, coil, solenoid). Ensure field lines have correct direction arrows, do not cross, and their density reflects field strength. For permanent magnets, lines should emerge from North and enter South. |
| Calculate | Show all working, including the formula used, substitution of values with units, and the final answer with correct units and appropriate significant figures. For circular motion, explicitly state the magnetic force provides the centripetal force. |
Common Mistakes
Confusing Fleming's left-hand rule (motor effect) with Fleming's right-hand rule (generator effect).
Remember, the left-hand rule is for determining the direction of force on a current-carrying conductor or moving charge (motor effect), while the right-hand rule is for determining the direction of induced current (generator effect).
Incorrectly applying the angle \theta in F = BIL sin \theta or F = Bqv sin \theta, especially when the conductor/velocity is parallel to the field.
The angle \theta is between the direction of the current/velocity and the magnetic field lines. If they are parallel (\theta = 0\degree) or anti-parallel (\theta = 180\degree), sin \theta = 0, and the force is zero. If they are perpendicular (\theta = 90\degree), sin \theta = 1, and the force is maximum.
Assuming magnetic field lines can cross or touch.
Magnetic field lines never cross or touch. Their density indicates the strength of the magnetic field, and their direction indicates the direction of the force on a north-seeking pole.
This chapter explores alternating currents and voltages, defining their characteristics and mathematical representation. It covers power calculations in AC circuits using root-mean-square values and details the processes of rectification and smoothing to convert AC into direct current.
direct current — A steady current in one direction.
Unlike alternating current, a direct current maintains a constant direction of flow. This is the type of current typically supplied by batteries.
alternating current — A current or voltage that reverses its direction regularly and is usually sinusoidal.
Alternating current (AC) is characterized by its periodic change in direction and magnitude, typically following a sine wave pattern. This is the standard form of electricity supplied to homes and businesses.
period — The time T taken for one complete cycle of the alternating current.
The period is a fundamental characteristic of an alternating current or voltage, representing the duration for one full oscillation before the pattern repeats. It is measured in seconds.
frequency — The number of complete cycles per unit time.
Frequency quantifies how often an alternating current or voltage completes a full cycle within a given time, usually one second. It is the reciprocal of the period and is measured in Hertz (Hz).
peak value — I₀ or V₀, the amplitude of the oscillating current or voltage.
The peak value represents the maximum instantaneous value reached by the current or voltage during a cycle. It is the amplitude of the sinusoidal waveform.
peak-to-peak value — 2I₀ or 2V₀, or twice the amplitude.
The peak-to-peak value is the total range of the alternating current or voltage, from its maximum positive value to its maximum negative value. It is simply twice the peak value.
Alternating current equation: I = I₀ sin ωt
This equation represents a sinusoidally alternating current, where I is the instantaneous current at time t, I₀ is the peak current, and ω is the angular frequency.
Alternating voltage equation: V = V₀ sin ωt
This equation represents a sinusoidally alternating voltage, where V is the instantaneous voltage at time t, V₀ is the peak voltage, and ω is the angular frequency.
Period in terms of angular frequency: T = 2π/ω
This formula relates the period (T) of an oscillation to its angular frequency (ω).
Frequency in terms of period: f = 1/T
Frequency (f) is the reciprocal of the period (T).
Frequency in terms of angular frequency: f = ω/(2π)
This formula connects the frequency (f) of an oscillation to its angular frequency (ω).
Alternating currents and voltages are characterized by their periodic nature, typically following a sinusoidal pattern. Key characteristics include the period (T), which is the time for one complete cycle, and the frequency (f), which is the number of cycles per unit time. The maximum value reached by the current or voltage is known as the peak value (I₀ or V₀), while the peak-to-peak value is twice this amplitude.
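The relationships above can be tied together in a short sketch for an illustrative 50 Hz supply with a 2.0 A peak current (values chosen for demonstration, not from the text):

```python
import math

# I = I0 sin(wt), T = 2*pi/w and f = 1/T for a 50 Hz supply.
f = 50.0                 # frequency, Hz
omega = 2 * math.pi * f  # angular frequency, rad/s
T = 2 * math.pi / omega  # period, s (0.02 s for 50 Hz)

I0 = 2.0  # peak current, A

def current(t):
    """Instantaneous current at time t."""
    return I0 * math.sin(omega * t)

print(T)             # ≈ 0.02 s
print(current(T/4))  # a quarter-cycle in, the current is at its peak, ≈ 2.0 A
```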
A cathode-ray oscilloscope (CRO) is an essential tool for visualizing and measuring alternating currents and voltages. The period and frequency can be determined by observing the waveform on the screen and using the time-base control setting. Similarly, the peak voltage can be found by measuring the amplitude of the trace and applying the Y-amplifier control setting.
Students often confuse the peak value (I₀ or V₀) with the peak-to-peak value (2I₀ or 2V₀). Remember that the peak value is the amplitude, while the peak-to-peak value is twice the amplitude.
Instantaneous power in a resistor: P = I₀²R sin²ωt
This formula gives the power dissipated at any instant in a resistor when a sinusoidal alternating current flows through it.
Mean power in a resistor: ⟨P⟩ = ½ I₀²R
For a sinusoidal alternating current, the mean power dissipated in a resistive load is half the maximum power.
Students often assume the average value of an alternating current is non-zero, leading to incorrect power calculations. Remember that the average value of a sinusoidal AC over a complete cycle is zero, which is why RMS values are used for power.
root-mean-square — That value of the direct current or direct voltage that would produce thermal energy at the same rate in a resistor.
The root-mean-square (r.m.s.) value provides an effective measure of an alternating current or voltage. It is equivalent to the steady direct current or voltage that would dissipate the same average power in a resistive load.
Mean square current: ⟨I²⟩ = ½ I₀²
This formula gives the average value of the square of the current for a sinusoidal AC.
Mean square voltage: ⟨V²⟩ = ½ V₀²
This formula gives the average value of the square of the voltage for a sinusoidal AC.
RMS current: I_rms = I₀/√2
This formula relates the root-mean-square (RMS) current to the peak current (I₀) for a sinusoidal alternating current.
RMS voltage: V_rms = V₀/√2
This formula relates the root-mean-square (RMS) voltage to the peak voltage (V₀) for a sinusoidal alternating voltage.
In a resistive AC circuit, the instantaneous power varies with time. The mean power dissipated in a resistive load for a sinusoidal alternating current is half the maximum power. To effectively compare AC with DC in terms of power dissipation, root-mean-square (r.m.s.) values are used. The r.m.s. value of an AC is the equivalent DC value that would produce thermal energy at the same average rate in a resistor. For sinusoidal AC, the r.m.s. current and voltage are related to their peak values by a factor of 1/√2.
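A quick numerical check of these relationships, taking 230 V mains as the r.m.s. value (as the notes advise) and an illustrative 100 Ω resistive load:

```python
import math

# r.m.s. and peak values for a 230 V (r.m.s.) sinusoidal supply.
# The 100-ohm load is illustrative.
V_rms = 230.0
V0 = V_rms * math.sqrt(2)  # peak voltage, ≈ 325 V

R = 100.0
mean_power = V_rms**2 / R  # mean power uses r.m.s. values: 529 W
peak_power = V0**2 / R     # maximum instantaneous power: twice the mean

print(V0, mean_power, peak_power)
```

Using the peak voltage by mistake would overstate the mean power by a factor of two, which is exactly the common error flagged below.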
Students often use peak values instead of RMS values for power calculations in domestic supply contexts. Remember that any stated mains voltage (e.g., 230V) is typically the RMS value, and RMS values must be used for calculating mean power.
A common mistake is misunderstanding the relationship between RMS and peak values, for example, forgetting the √2 factor or incorrectly multiplying/dividing by 2. Always use I_rms = I₀/√2 and V_rms = V₀/√2.
In calculations, assume any stated mains voltage (e.g., 230V) is the r.m.s. value unless explicitly told it is the peak value.
rectification — The process of converting an alternating current into a direct current.
Rectification is a crucial process that transforms the bidirectional flow of alternating current into a unidirectional flow, which is characteristic of direct current. This is typically achieved using diodes.
half-wave rectification — A process where the output voltage across a resistor consists only of the positive half-cycles of the input voltage, rejecting the negative part.
Half-wave rectification uses a single diode to allow only one half-cycle (e.g., the positive half) of the AC input to pass through to the load, effectively blocking the other half-cycle. This results in a pulsating DC output.
full-wave rectification — A process that makes use of the negative half-cycles of the input and reverses their polarity.
Full-wave rectification converts both the positive and negative half-cycles of the AC input into a pulsating DC output of a single polarity. This is typically achieved using a bridge rectifier circuit, which inverts the negative half-cycles.
Rectification is the conversion of alternating current to direct current. Half-wave rectification, achieved with a single diode, allows only one half-cycle of the AC input to pass, blocking the other. Full-wave rectification, typically using a bridge rectifier with four diodes, utilizes both half-cycles by inverting the negative half-cycles, resulting in a more continuous, though still pulsating, unidirectional output.
When asked to 'distinguish graphically' between half-wave and full-wave rectification, draw both output waveforms on separate, clearly labelled axes (V vs t), ensuring the full-wave output has twice the frequency of the half-wave output.
For questions asking you to 'explain' the action of a rectifier, always include a clear circuit diagram and sketch the input and output voltage waveforms.
smoothing — The process of reducing fluctuations in the unidirectional output voltage by inserting a capacitor across the output terminals of a rectifier circuit.
Smoothing aims to reduce the 'ripple' in the rectified DC output, making it more constant. This is achieved by connecting a capacitor in parallel with the load resistor, which charges during voltage peaks and discharges slowly when the voltage drops, filling in the gaps.
ripple — The magnitude of the variation of the voltage or current that is superimposed on the direct voltage or current.
Ripple refers to the small, unwanted AC component that remains in the DC output after rectification and partial smoothing. A smaller ripple indicates a more stable and 'smoother' DC output.
After rectification, the output voltage is unidirectional but still pulsates. A capacitor connected in parallel with the load resistor can smooth this output. The capacitor charges when the rectified voltage is high and discharges slowly through the load resistor when the voltage falls, thereby reducing the voltage fluctuation, known as ripple. The effectiveness of smoothing depends on the value of the capacitance and the load resistance; a larger capacitance and a larger load resistance generally lead to less ripple.
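The dependence of ripple on capacitance can be sketched numerically. This is a rough model, assuming the capacitor decays exponentially as V = V₀ exp(-t/RC) between charging peaks; all component values are illustrative.

```python
import math

# Ripple of a smoothed full-wave rectified supply, modelled as exponential
# decay between successive peaks. Values are illustrative.
V0 = 10.0     # voltage at each peak, V
R = 1.0e3     # load resistance, ohms
f = 100.0     # peak frequency for full-wave rectified 50 Hz mains, Hz
dt = 1.0 / f  # time between successive peaks, s

ripples = []
for C in (10e-6, 100e-6, 1000e-6):
    V_min = V0 * math.exp(-dt / (R * C))  # voltage just before the next peak
    ripples.append(V0 - V_min)            # peak-to-trough ripple

print(ripples)  # ripple shrinks as C (and hence the time constant RC) grows
```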
Students often misunderstand the relationship between the time constant (RC) and ripple magnitude for smoothing. Remember that a larger time constant (due to larger C or R) leads to less ripple because the capacitor discharges more slowly.
To explain smoothing, state that the capacitor charges when the rectified voltage is high and discharges slowly through the load resistor when the voltage falls, reducing the voltage fluctuation (ripple).
When reading from a CRO trace, always state the formula used (e.g., Peak Voltage = vertical divisions × Y-gain) and show your readings from the graticule to secure method marks.
Definitions Bank
direct current
A steady current in one direction.
alternating current
A current or voltage that reverses its direction regularly and is usually sinusoidal.
period
The time T taken for one complete cycle of the alternating current.
frequency
The number of complete cycles per unit time.
peak value
I₀ or V₀, the amplitude of the oscillating current or voltage.
Command Word Guide
| Command word | Guidance |
| --- | --- |
| Distinguish graphically | Draw both output waveforms (e.g., half-wave and full-wave rectification) on separate, clearly labelled axes (V vs t), ensuring the full-wave output has twice the frequency of the half-wave output. |
| Explain | For rectification, include a clear circuit diagram and sketch the input and output voltage waveforms. For smoothing, describe the capacitor charging when voltage is high and discharging slowly through the load when voltage falls, reducing ripple. |
| Calculate | Show all working steps, including formulas used and substitution of values. Pay attention to units and significant figures. For power calculations, ensure RMS values are used. |
Common Mistakes
Confusing peak value with peak-to-peak value.
The peak value (I₀ or V₀) is the amplitude, while the peak-to-peak value (2I₀ or 2V₀) is twice the amplitude.
Assuming the average value of an alternating current is non-zero.
The average value of a sinusoidal AC over a complete cycle is zero. RMS values are used for power calculations because they represent the effective DC equivalent.
Not distinguishing between instantaneous power and mean power in AC circuits.
Instantaneous power varies with time (P = I₀²R sin²ωt), while mean power is the average over a cycle, which for a sinusoidal AC is half the maximum power (⟨P⟩ = ½ I₀²R).
This chapter explores quantum physics, revealing the particulate nature of electromagnetic radiation through photons and the wave-particle duality of matter. It details the photoelectric effect, discrete atomic energy levels, and the formation of characteristic line spectra.
Planck constant — A constant, h, with a value of 6.63 × 10⁻³⁴ J s, used in quantum theory to relate the energy of a photon to its frequency.
The Planck constant is a fundamental constant in quantum mechanics. It quantifies the relationship between the energy of a photon and its frequency, forming the basis for understanding the discrete nature of energy at the atomic and subatomic levels.
photons — Energy packets that light radiation consists of, as proposed by Albert Einstein.
Light, and indeed all electromagnetic radiation, is not a continuous wave but rather consists of discrete packets of energy called photons. Each photon carries a specific amount of energy, which depends on its frequency.
photon — The special name given to a quantum of energy when the energy is in the form of electromagnetic radiation.
A photon is the fundamental particle of light and all other forms of electromagnetic radiation. It represents a single quantum, or discrete unit, of electromagnetic energy.
electronvolt (eV) — The energy gained by an electron when it is accelerated from rest in a vacuum through a potential difference of one volt.
The electronvolt is a convenient unit of energy often used in atomic and nuclear physics. It represents the kinetic energy acquired by an electron when it moves through an electric potential difference of one volt.
Photon energy: E = hf
This formula relates the energy (E) of a photon directly to its frequency (f), with h being the Planck constant.
Photon energy (wavelength form): E = hc/\lambda
This alternative formula expresses photon energy (E) in terms of its wavelength (\lambda), using the speed of electromagnetic waves (c) and the Planck constant (h).
Photon momentum: p = E/c
This formula calculates the momentum (p) of a photon from its energy (E) and the speed of electromagnetic waves (c).
Electronvolt conversion: 1 eV = 1.60 × 10⁻¹⁹ J
This conversion factor is used to switch between electronvolts (eV) and joules (J), which are both units of energy.
Electromagnetic radiation exhibits a particulate nature, meaning it consists of discrete energy packets called photons. The energy of a single photon is directly proportional to its frequency, given by the formula E = hf, where h is the Planck constant. This relationship can also be expressed in terms of wavelength as E = hc/\lambda, where c is the speed of light. Photons also possess momentum, which can be calculated using the formula p = E/c.
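These photon formulae can be exercised together. The sketch below uses the standard constants and an illustrative 500 nm (green) wavelength:

```python
# Photon energy E = hf = hc/lambda, momentum p = E/c, and eV conversion.
# The 500 nm wavelength is an illustrative choice.
h = 6.63e-34  # Planck constant, J s
c = 3.00e8    # speed of light, m/s
e = 1.60e-19  # joules per electronvolt

lam = 500e-9            # wavelength, m
E_joules = h * c / lam  # photon energy, J
E_eV = E_joules / e     # the same energy expressed in eV
p = E_joules / c        # photon momentum, kg m/s

print(E_eV)  # ≈ 2.5 eV
print(p)     # ≈ 1.3e-27 kg m/s
```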
photoelectric emission — The release of electrons from the surface of a metal when electromagnetic radiation is incident on its surface.
Photoelectric emission is a phenomenon where electrons are ejected from a metal surface when light shines on it. These emitted electrons are known as photoelectrons, and their release depends on the properties of the incident radiation.
photoelectrons — The electrons emitted during photoelectric emission.
When electromagnetic radiation strikes a metal surface, if certain conditions are met, electrons can be ejected. These specific electrons, which have been released due to the incident photons, are termed photoelectrons.
threshold frequency f_0 — The minimum frequency of the incident radiation above which photoemission takes place.
For photoelectric emission to occur, the incident electromagnetic radiation must have a frequency at or above a specific minimum value, known as the threshold frequency. Below this frequency, no electrons will be emitted, regardless of the intensity of the light.
threshold wavelength \lambda_0 — The wavelength corresponding to the threshold frequency.
The threshold wavelength is inversely related to the threshold frequency. It represents the maximum wavelength of incident radiation that can cause photoelectric emission from a given metal surface.
work function energy \Phi — The minimum amount of energy necessary for an electron to escape from the surface of a metal.
Every metal has a characteristic work function energy, which is the minimum energy an electron needs to overcome the attractive forces holding it within the metal and escape from its surface. This energy must be supplied by an incident photon for photoelectric emission to occur.
stopping potential — The minimum value of the potential difference necessary to stop the electron flow in a photoelectric experiment, which measures the maximum kinetic energy of the photoelectrons.
In a photoelectric experiment, the stopping potential is the reverse voltage applied that is just sufficient to halt the most energetic photoelectrons. This potential difference provides a direct measure of the maximum kinetic energy of the emitted photoelectrons.
Threshold frequency and work function: \Phi = hf_0
This formula defines the relationship between the threshold frequency (f_0) and the work function energy (\Phi) of a metal, using the Planck constant (h).
Einstein photoelectric equation: hf = \Phi + \frac{1}{2}m_e v_{\text{max}}^2
This equation relates the energy of an incident photon (hf) to the work function (\Phi) and the maximum kinetic energy (1/2 m_e v_max^2) of the emitted photoelectrons.
Einstein photoelectric equation (alternative form)
This alternative form, hf = hf_0 + \frac{1}{2}m_e v_{max}^2, substitutes the work function (\Phi) with the product of the Planck constant and the threshold frequency (hf_0).
The photoelectric effect is the emission of electrons, called photoelectrons, from a metal surface when electromagnetic radiation is incident upon it. This effect demonstrates the particulate nature of light. For emission to occur, the incident radiation must have a frequency greater than or equal to the threshold frequency (f_0), which corresponds to the minimum energy required for an electron to escape, known as the work function energy (\Phi). Einstein's photoelectric equation, hf = \Phi + \frac{1}{2}m_e v_{\text{max}}^2, explains this by stating that the energy of an incident photon (hf) is used to overcome the work function and provide the photoelectron with kinetic energy.
Students often incorrectly believe that the rate of emission of photoelectrons is independent of frequency. In fact it depends on the rate of photon arrival: at constant intensity, a higher frequency means each photon carries more energy, so fewer photons arrive per second and fewer photoelectrons are emitted per second. Also, classical wave theory predicts that photoemission should occur at any frequency if the intensity is high enough or applied for long enough, which contradicts the experimentally observed threshold frequency and the near-instantaneous nature of emission.
When explaining the photoelectric effect, state that it involves a one-to-one interaction between a single photon and a single surface electron to secure marks.
Students often confuse the effect of intensity on photoelectric current (proportional) versus maximum kinetic energy of photoelectrons (independent). Remember that increasing intensity increases the number of photoelectrons per second, while increasing frequency increases their maximum kinetic energy.
When interpreting a graph of KE_max against frequency, identify the gradient as the Planck constant (h), the y-intercept as the negative work function (-\Phi), and the x-intercept as the threshold frequency (f_0).
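The photoelectric bookkeeping above can be checked numerically. This is a minimal illustrative sketch, not part of the original notes: the work function of 2.3 eV (roughly sodium) and the frequency of 1.0 × 10^15 Hz are assumed example values.

```python
# Sketch of the Einstein photoelectric equation hf = phi + KE_max.
# All values below the function are assumed, illustrative numbers.
H = 6.63e-34         # Planck constant / J s
E_CHARGE = 1.60e-19  # elementary charge / C

def max_kinetic_energy(frequency_hz, work_function_j):
    """Return KE_max in joules, or 0.0 if below the threshold frequency."""
    photon_energy = H * frequency_hz
    return max(photon_energy - work_function_j, 0.0)

phi = 2.3 * E_CHARGE                      # assumed work function, 2.3 eV in J
ke_max = max_kinetic_energy(1.0e15, phi)  # KE_max in J for UV at 1.0e15 Hz
stopping_potential = ke_max / E_CHARGE    # in V, since e * V_s = KE_max
```

Note that for a frequency below the threshold (here any f with hf < 3.68 × 10^-19 J), the function returns zero: no emission, whatever the intensity.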
de Broglie wavelength — The wavelength associated with a moving particle, given by \lambda = h/p.
The de Broglie wavelength describes the wave-like properties of particles. It suggests that any moving particle, not just photons, has an associated wavelength that is inversely proportional to its momentum, demonstrating wave-particle duality.
de Broglie wavelength
This formula, \lambda = h/p, calculates the de Broglie wavelength (\lambda) of a moving particle from the Planck constant (h) and the particle's momentum (p).
The concept of wave-particle duality suggests that particles can exhibit wave-like properties, and waves can exhibit particle-like properties. Evidence for the wave nature of particles, specifically electrons, is provided by electron diffraction. When a beam of electrons is passed through a thin crystalline material, they produce a diffraction pattern, similar to how waves would. This phenomenon demonstrates that electrons, which are typically considered particles, also possess a wave nature, with an associated de Broglie wavelength given by \lambda = h/p, where p is the electron's momentum.
For questions on electron diffraction, explicitly state that diffraction is a wave property, and since electrons (particles) show diffraction, they must also have a wave-like nature.
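A worked example helps here. The following sketch (values assumed for illustration) finds the de Broglie wavelength of an electron accelerated through 5.0 kV, using the non-relativistic KE = eV to get the speed first.

```python
import math

# Assumed illustrative calculation of a de Broglie wavelength, lambda = h/p.
H = 6.63e-34         # Planck constant / J s
M_E = 9.11e-31       # electron rest mass / kg
E_CHARGE = 1.60e-19  # elementary charge / C

def de_broglie_wavelength(mass_kg, speed_ms):
    """lambda = h / (m v), valid at non-relativistic speeds."""
    return H / (mass_kg * speed_ms)

# Electron gains KE = eV through an assumed p.d. of 5.0 kV, so v = sqrt(2eV/m):
v = math.sqrt(2 * E_CHARGE * 5.0e3 / M_E)
wavelength = de_broglie_wavelength(M_E, v)  # ~1.7e-11 m
```

A wavelength of order 10^-11 m is comparable to atomic spacing in a crystal, which is why electron diffraction through thin crystalline materials is observable.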
electron energy levels — The specific energies that electrons in an atom can have.
Within an atom, electrons are not free to possess any arbitrary energy. Instead, they are restricted to specific, discrete energy values, known as electron energy levels. These levels are unique to each type of atom.
discrete energy levels — Energy levels where electrons can have only certain specific energy values, but not energies between them.
The term 'discrete' emphasizes that electron energy levels are distinct and separated, like steps on a ladder. Electrons can occupy these specific energy states but cannot exist in the continuous energy values between them.
ground state — The condition where electrons in an atom occupy the lowest energy levels available.
An atom is in its ground state when its electrons are arranged in the lowest possible energy configuration. This is the most stable state for the atom.
excited state — The condition where an electron has absorbed energy and been promoted to a higher energy level.
When an electron absorbs energy, it can jump from a lower energy level to a higher one, placing the atom in an excited state. This state is unstable, and the electron will eventually return to a lower level.
electron transition — The movement of an electron between energy levels.
An electron transition occurs when an electron moves from one discrete energy level to another. This movement involves either the absorption or emission of a photon with energy precisely equal to the energy difference between the levels.
ionisation energy — The energy required to completely remove an electron in the ground state from an atom.
Ionisation energy is the minimum energy needed to overcome the attractive forces of the nucleus and completely remove an electron from an atom, specifically when the electron is in its lowest energy state.
photo-ionisation — The process where ionisation energy is supplied by the absorption of a photon.
Photo-ionisation is a specific type of ionisation where the energy required to remove an electron from an atom is provided by the absorption of a single photon.
Energy of emitted photon (electron transition)
This formula, hf = E_2 - E_1, relates the energy (hf) of an emitted photon to the positive difference between a higher energy level (E_2) and a lower energy level (E_1) during an electron transition.
Wavelength of emitted radiation (electron transition)
This formula, \lambda = hc/\Delta E, calculates the wavelength (\lambda) of emitted radiation from the energy difference (\Delta E) between electron energy levels, using the Planck constant (h) and the speed of light (c).
Electrons in isolated atoms can only occupy specific, discrete energy levels. These are not continuous, meaning electrons can only have certain specific energy values and not energies in between. The lowest available energy level is known as the ground state. When an electron absorbs energy, it can be promoted to a higher energy level, entering an excited state. These electron transitions, or movements between energy levels, are fundamental to understanding atomic spectra.
continuous spectrum — A band of different colours (and wavelengths) between two limits, such as that produced by white light.
A continuous spectrum contains all wavelengths of light within a given range, appearing as a smooth progression of colours without any gaps. This is typically produced by hot, dense objects like the filament of an incandescent bulb.
emission spectrum — A spectrum produced by the emission of light from a source, which can be continuous or consist of bright lines.
An emission spectrum shows the wavelengths of light emitted by a source. Depending on the source, it can be a continuous band of colours or, more characteristically for gases, a series of distinct bright lines.
emission line spectrum — A spectrum consisting of a number of separate colours, each seen as a line, characteristic of the gas in a discharge tube.
An emission line spectrum is a unique 'fingerprint' of an element, consisting of distinct bright lines at specific wavelengths. These lines are produced when excited electrons in isolated atoms transition from higher to lower discrete energy levels, emitting photons of precise energies.
absorption spectrum — A spectrum where light of certain wavelengths is missing, appearing as dark lines, due to absorption by a gas.
An absorption spectrum is formed when light from a continuous source passes through a cool gas. The gas atoms absorb photons of specific energies, corresponding to their electron energy level transitions, resulting in dark lines at those wavelengths within the continuous spectrum.
spectroscopy — The study of spectra.
Spectroscopy is the scientific technique of studying the interaction between matter and electromagnetic radiation. It involves analyzing the spectra of light emitted or absorbed by substances to determine their composition, temperature, and other physical properties.
spectrometers — Instruments used to measure the wavelengths of spectra.
Spectrometers are optical instruments designed to measure the intensity of light at different wavelengths. They are crucial tools in spectroscopy for analyzing emission and absorption spectra.
Emission line spectra are formed when electrons in excited atoms transition from higher discrete energy levels to lower ones. As an electron falls to a lower energy state, it emits a photon whose energy (hf) is exactly equal to the energy difference between the two levels (E_2 - E_1). Since these energy levels are discrete, only specific photon energies, and thus specific frequencies and wavelengths, are emitted, resulting in distinct bright lines. Conversely, absorption line spectra are formed when a continuous spectrum of light passes through a cool gas. Electrons in the gas atoms absorb photons with energies precisely matching the energy differences between their discrete energy levels, causing them to jump to higher excited states. This absorption removes specific wavelengths from the continuous spectrum, appearing as dark lines against a continuous background.
To explain the formation of line spectra, you must mention 'discrete energy levels' and 'electron transitions' between them.
In calculations for photon energy from electron transitions (hf = E_2 - E_1), remember that energy levels are negative; the energy difference (\Delta E) must be a positive value.
Always convert energy units between Joules (J) and electronvolts (eV) when required by the question, using 1 eV = 1.60 \times 10^{-19} J. For the de Broglie wavelength \lambda = h/p, remember that momentum (p) is the product of mass and velocity (p = mv).
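The transition and unit-conversion steps above can be combined in one short calculation. This sketch uses the standard textbook hydrogen levels E_3 = -1.51 eV and E_2 = -3.40 eV as illustrative inputs.

```python
# Illustrative sketch: wavelength of the photon emitted in a transition,
# using lambda = hc / dE with energy levels given in eV (textbook values).
H = 6.63e-34   # Planck constant / J s
C = 3.00e8     # speed of light / m s^-1
EV = 1.60e-19  # 1 eV in joules

def emitted_wavelength(e_upper_ev, e_lower_ev):
    """lambda = hc / dE; the levels are negative, but dE comes out positive."""
    delta_e = (e_upper_ev - e_lower_ev) * EV  # energy difference in J
    return H * C / delta_e

# Hydrogen n=3 -> n=2: E_3 = -1.51 eV, E_2 = -3.40 eV, dE = 1.89 eV.
wavelength = emitted_wavelength(-1.51, -3.40)  # ~6.6e-7 m (red light)
```

Note how subtracting the two negative level energies gives a positive ΔE, as the exam tip above requires.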
Definitions Bank
Planck constant
A constant, h, with a value of 6.63 × 10^{-34} J s, used in quantum theory to relate the energy of a photon to its frequency.
photons
Energy packets that light radiation consists of, as proposed by Albert Einstein.
electronvolt (eV)
The energy gained by an electron when it is accelerated from rest in a vacuum through a potential difference of one volt.
photon
The special name given to a quantum of energy when the energy is in the form of electromagnetic radiation.
photoelectric emission
The release of electrons from the surface of a metal when electromagnetic radiation is incident on its surface.
Command Word Guide
| Explain | For the photoelectric effect, explicitly mention the one-to-one interaction between a photon and an electron, the role of work function, and the threshold frequency. For line spectra, refer to discrete energy levels and electron transitions. |
| Describe | For electron diffraction, describe the observation of a diffraction pattern and link it to the wave nature of electrons. For spectra, describe the appearance of continuous, emission line, and absorption line spectra. |
| Calculate | Ensure correct use of formulas (E=hf, E=hc/\lambda, p=E/c, hf = \Phi + KE_max, \lambda = h/p, hf = E_2 - E_1) and appropriate unit conversions, especially between Joules and electronvolts. |
| State | Provide concise, accurate definitions for terms like Planck constant, photon, work function, threshold frequency, and de Broglie wavelength. |
Common Mistakes
Believing the rate of photoelectron emission is independent of frequency.
The rate of emission depends on the rate of photon arrival, which changes with frequency for constant intensity.
Applying classical wave theory to explain the photoelectric effect.
Classical wave theory incorrectly predicts photoemission for any frequency if intensity is high enough; quantum theory with photons is required.
Confusing the effect of intensity versus frequency on photoelectrons.
Increasing intensity increases the number of photoelectrons (current), while increasing frequency increases their maximum kinetic energy.
This chapter explores nuclear physics, beginning with the fundamental concepts of mass defect and binding energy, which are crucial for understanding nuclear stability and energy release. It then delves into nuclear fusion and fission, explaining their mechanisms and energy generation. Finally, the chapter covers radioactive decay, detailing its random and spontaneous nature, and providing mathematical models for activity, decay constant, and half-life.
mass defect — The difference between the total mass of the separate nucleons and the combined mass of the nucleus.
When nucleons (protons and neutrons) combine to form a nucleus, the total mass of the nucleus is less than the sum of the masses of the individual nucleons. This 'missing' mass is known as the mass defect.
Students often confuse mass defect with the mass of the nucleus itself. Remember that mass defect is the difference in mass, not the total mass of the nucleus.
binding energy — The energy equivalent of the mass defect of a nucleus. It is the energy required to separate to infinity all the nucleons of a nucleus.
The mass defect is converted into energy according to Einstein's mass-energy equivalence principle. This energy, known as binding energy, is released when a nucleus forms and must be supplied to break the nucleus apart into its constituent nucleons.
Students often misunderstand that binding energy is released when a nucleus forms and required to separate it. It's the energy holding the nucleus together.
Binding energy per nucleon — The total energy needed to completely separate all the nucleons in a nucleus divided by the number of nucleons in the nucleus.
This value provides a measure of the stability of a nucleus. A higher binding energy per nucleon indicates a more stable nucleus, as more energy is required to remove each nucleon.
Mass-energy equivalence
This fundamental equation, E = mc^2, relates energy (E) to mass (m), or a change in energy to a change in mass, where c is the speed of light. It is used to convert between mass and energy.
Energy released in nuclear reactions
This form, E = c^2\Delta m, is specifically used to calculate the energy released in nuclear reactions, where \Delta m represents the change in mass (mass defect) during the reaction.
Students often incorrectly apply E=mc^2 without converting mass to kilograms or energy to joules. Always ensure consistent SI units.
The variation of binding energy per nucleon with nucleon number is a key indicator of nuclear stability. Nuclei with higher binding energy per nucleon are more stable. The peak stability occurs around a nucleon number of A ≈ 56 (Iron). Nuclear reactions, such as fusion and fission, release energy when they result in products with a higher binding energy per nucleon, moving the system towards a more stable state.
When sketching the binding energy per nucleon graph, label axes, mark the peak at A ≈ 56, and clearly indicate the regions where fusion and fission occur.
Nuclear fusion — Occurs when two light nuclei combine to form a nucleus of greater mass.
Nuclear fusion typically involves light nuclei, which combine to form a heavier nucleus. This process releases energy because the resulting heavier nucleus has a higher binding energy per nucleon than the initial light nuclei, moving towards greater stability.
Nuclear fission — The splitting of a heavy nucleus into two lighter nuclei of approximately the same mass.
Nuclear fission involves a heavy nucleus splitting into two lighter nuclei. This process also releases energy, as the lighter product nuclei have a higher binding energy per nucleon compared to the original heavy nucleus, leading to a more stable configuration.
To 'explain' energy release in fission or fusion, you must state that the products have a higher binding energy per nucleon, meaning the system has moved to a more stable state.
Nuclear reactions, including fusion and fission, can be represented by nuclear equations. In these equations, it is crucial to conserve both the nucleon number (total number of protons and neutrons) and the charge (total number of protons) on both sides of the reaction. The energy released in these reactions is directly related to the change in mass, calculated using E = c^2Δm.
Students often fail to conserve nucleon number and charge in nuclear reaction equations. Always check that the sums of the top and bottom numbers are equal on both sides.
In energy release calculations (E = c^2Δm), Δm is always (total mass of reactants - total mass of products). Ensure this order to get a positive energy release.
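The reactants-minus-products rule can be made concrete with a worked example. This sketch computes the binding energy of helium-4 from its mass defect; the nucleon and nuclear masses used are typical textbook values, assumed here for illustration.

```python
# Hedged worked example: E = c^2 * dm for forming He-4 from its nucleons.
# The masses in u below are illustrative textbook values.
C = 3.00e8      # speed of light / m s^-1
U = 1.66e-27    # 1 atomic mass unit / kg
MEV = 1.60e-13  # 1 MeV in joules

def energy_released_j(mass_reactants_u, mass_products_u):
    """E = c^2 * dm, with dm = (reactants - products), converted u -> kg."""
    delta_m_kg = (mass_reactants_u - mass_products_u) * U
    return C * C * delta_m_kg

# 2 protons + 2 neutrons -> He-4 nucleus:
# proton ~1.00728 u, neutron ~1.00867 u, He-4 nucleus ~4.00151 u (assumed).
e_j = energy_released_j(2 * 1.00728 + 2 * 1.00867, 4.00151)
e_mev = e_j / MEV  # roughly 28 MeV, i.e. ~7 MeV per nucleon
```

The order (reactants first, products second) gives a positive mass defect and hence a positive energy release, as the tip above emphasises.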
Radioactive decay — A random process in that it cannot be predicted which nucleus will decay next. There is a constant probability that a nucleus will decay in any fixed period of time.
Radioactive decay is the process by which an unstable atomic nucleus loses energy by emitting radiation. It is inherently unpredictable for individual nuclei, but statistically, a constant probability of decay exists for any given nucleus over a fixed period.
spontaneous process — Radioactive decay is a spontaneous process because it is not affected by any external factors, such as temperature or pressure.
The rate of radioactive decay is intrinsic to the nucleus itself and cannot be influenced by external physical or chemical conditions. This means that factors like temperature, pressure, or chemical bonding do not alter the decay rate.
Be prepared to clearly distinguish between 'random' (cannot predict which nucleus decays next) and 'spontaneous' (decay rate is unaffected by external conditions).
Activity — The number of nuclear decays occurring per unit time in the source.
Activity is a measure of the rate at which a radioactive source decays. It is directly proportional to the number of undecayed nuclei present in the sample at any given time.
decay constant λ — The probability per unit time of the decay of a nucleus.
The decay constant (\lambda) quantifies the likelihood of a nucleus decaying in a given time interval. A larger decay constant indicates a higher probability of decay and thus a faster decay rate.
Rate of decay (differential form)
This equation, dN/dt = -\lambda N, describes the rate of change of the number of undecayed nuclei (N) with time (t). The negative sign indicates that the number of nuclei decreases as time progresses.
Students often misinterpret the negative sign in dN/dt = -\lambda N as implying a negative number of nuclei or activity. The negative sign simply indicates a decrease in N over time; N and A are always positive.
Activity
This formula, A = \lambda N, relates the measurable activity (A) of a radioactive source to the decay constant (\lambda) and the number of undecayed nuclei (N) present.
Exponential decay
This general exponential decay law, x = x_0e^{-\lambda t}, describes how the activity (A), number of undecayed nuclei (N), or corrected count rate (C) decreases over time (t). x_0 is the initial value at t = 0.
half-life — The time taken for the number of undecayed nuclei to be reduced to half its original number.
Half-life is a characteristic property of a radioactive isotope, representing the time required for half of the radioactive nuclei in a sample to undergo decay. It is inversely related to the decay constant.
Half-life and decay constant relationship
This equation, \lambda t_{1/2} = \ln 2 (equivalently t_{1/2} = \ln 2 / \lambda), provides a direct relationship between the half-life (t_{1/2}) of a nuclide and its decay constant (\lambda).
The exponential decay equation x = x_0e^(-\lambda t) can be used for the number of undecayed nuclei (N), activity (A), or the corrected count rate (C). Ensure you use consistent units for time and decay constant.
For 'define' questions, be precise. For binding energy, state it's the 'energy required to separate to infinity all the nucleons of a nucleus'.
Always account for background radiation when measuring count rates from radioactive sources. Subtract the background count from your readings before any calculations.
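The half-life and exponential-decay relationships above fit together in a few lines. This is an illustrative sketch; the half-life of 8.0 days and initial activity of 4.0 × 10^5 Bq are assumed example values (iodine-131 has a half-life of about this length).

```python
import math

# Illustrative decay calculation: lambda = ln(2)/t_half, A = A0 e^(-lambda t).
def decay_constant(half_life_s):
    """lambda = ln(2) / t_half, in s^-1 when t_half is in seconds."""
    return math.log(2) / half_life_s

def activity(a0, lam, t_s):
    """A = A0 e^(-lambda t); the same law applies to N or a corrected count rate."""
    return a0 * math.exp(-lam * t_s)

t_half = 8.0 * 24 * 3600           # assumed half-life: 8.0 days, in seconds
lam = decay_constant(t_half)
a = activity(4.0e5, lam, 2 * t_half)  # after two half-lives: a quarter remains
```

Note the consistent units: the decay constant is in s^-1 because the half-life was converted to seconds first, matching the exam tip above.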
Definitions Bank
mass defect
The difference between the total mass of the separate nucleons and the combined mass of the nucleus.
binding energy
The energy equivalent of the mass defect of a nucleus. It is the energy required to separate to infinity all the nucleons of a nucleus.
Binding energy per nucleon
The total energy needed to completely separate all the nucleons in a nucleus divided by the number of nucleons in the nucleus.
Nuclear fusion
Occurs when two light nuclei combine to form a nucleus of greater mass.
Nuclear fission
The splitting of a heavy nucleus into two lighter nuclei of approximately the same mass.
Command Word Guide
| Define | Provide precise, mark-scheme definitions for terms like mass defect, binding energy, half-life, and decay constant. Avoid colloquial language. |
| Explain | For energy release in fission or fusion, explain that the products have a higher binding energy per nucleon, leading to a more stable state and energy release. For radioactive decay, explain both its random and spontaneous nature. |
| Sketch | For the binding energy per nucleon graph, ensure axes are labelled correctly (Binding Energy per Nucleon vs. Nucleon Number), the peak is clearly marked around A=56, and regions for fusion and fission are indicated. |
| Calculate | Show all working, including formula substitution and unit conversions (e.g., u to kg, MeV to J). Pay attention to significant figures. |
Common Mistakes
Confusing mass defect with the mass of the nucleus itself.
Mass defect is the difference between the total mass of separate nucleons and the combined mass of the nucleus, not the nucleus's total mass.
Incorrectly applying E=mc^2 without converting mass to kilograms or energy to joules.
Always convert mass to kilograms (kg) and ensure energy is in joules (J) when using E=mc^2 for calculations.
Not understanding that binding energy is released when a nucleus forms and required to separate it.
Binding energy is the energy released when nucleons bind to form a nucleus, and it is the energy that must be supplied to break the nucleus apart.
This chapter covers the principles and applications of medical imaging techniques, specifically ultrasound, X-rays, and PET scanning. It details how these methods are generated, detected, and used to diagnose internal body structures, focusing on concepts like acoustic impedance, attenuation, and radioactive tracers.
transducer — Any device that converts energy from one form to another.
In medical physics, a transducer is crucial for both generating and detecting waves. For instance, a piezo-electric transducer converts electrical energy into ultrasound waves and vice versa, enabling diagnostic imaging.
specific acoustic impedance — The product of the density ρ of the medium and the speed c of the wave in the medium.
Specific acoustic impedance (Z = ρc) quantifies a medium's resistance to the passage of sound waves. A significant difference in acoustic impedance between two media at a boundary leads to a large reflection of ultrasound, which is fundamental for imaging internal structures.
intensity reflection coefficient — The ratio of the reflected intensity I_R to the incident intensity I₀ for a wave incident normally on a boundary between two media.
This coefficient, given by I_R/I₀ = (Z₁ − Z₂)²/(Z₁ + Z₂)², indicates the proportion of ultrasound intensity reflected at a boundary. A higher coefficient means more reflection, allowing for clearer detection of interfaces between different tissues.
linear absorption (or attenuation) coefficient — A constant for the medium that depends not only on the medium itself but also on the frequency of the ultrasound, used in the exponential attenuation equation I = I₀e⁻μx.
This coefficient (μ) describes how effectively a medium absorbs or scatters ultrasound waves. A higher μ means greater attenuation, leading to a reduction in wave intensity as it travels through the tissue.
Specific acoustic impedance
The formula Z = ρc is used for a medium through which a wave travels, where Z is specific acoustic impedance, ρ is density, and c is the speed of sound.
Intensity reflection coefficient
The formula I_R/I₀ = (Z₁ − Z₂)²/(Z₁ + Z₂)² applies to a wave incident normally on a boundary between two media, where I_R is reflected intensity, I₀ is incident intensity, and Z₁, Z₂ are the specific acoustic impedances of the two media.
Attenuation of ultrasound/X-rays
The formula I = I₀e⁻μx applies to a parallel beam of ultrasound or X-rays, where I is transmitted intensity, I₀ is incident intensity, μ is the linear absorption coefficient, and x is the thickness of the absorber.
Ultrasound waves are generated and detected using a piezo-electric transducer. This device utilizes piezo-electric crystals that change shape when a potential difference is applied across them, producing ultrasound waves. Conversely, when these crystals are deformed by returning ultrasound waves, they generate an electromotive force (e.m.f.), allowing for detection. The reflection of these ultrasound pulses at boundaries between different tissues provides diagnostic information about internal structures.
Students might not appreciate the importance of matching acoustic impedances (e.g., using jelly) for effective ultrasound transmission into the body, leading to significant reflection at the air-skin boundary. Remember that a coupling gel is needed because the large acoustic impedance mismatch between air and skin would otherwise reflect almost all the incident ultrasound.
When calculating the intensity reflection coefficient, IR/I₀ = (Z₂ - Z₁)² / (Z₂ + Z₁)², ensure you square both the numerator and the denominator terms.
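The coupling-gel point above can be verified numerically. This sketch uses approximate, assumed impedance values for air, soft tissue, and gel; only the orders of magnitude matter for the argument.

```python
# Illustrative sketch of IR/I0 = (Z2 - Z1)^2 / (Z2 + Z1)^2.
# The impedance values below are rough, assumed figures (kg m^-2 s^-1).
def reflection_coefficient(z1, z2):
    """Fraction of incident intensity reflected at normal incidence."""
    return ((z2 - z1) / (z2 + z1)) ** 2

Z_AIR = 4.3e2   # air (approximate)
Z_SKIN = 1.7e6  # soft tissue (approximate)
Z_GEL = 1.5e6   # coupling gel (approximate)

# Nearly total reflection at an air-skin boundary...
r_air = reflection_coefficient(Z_AIR, Z_SKIN)
# ...but only a small fraction reflected with gel, hence the coupling gel.
r_gel = reflection_coefficient(Z_GEL, Z_SKIN)
```

The squaring of both numerator and denominator, flagged in the tip above, is what makes the air-skin coefficient come out so close to 1.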
A-scan — A technique where a short pulse of ultrasound is transmitted into the body, and reflected pulses are detected and displayed on a cathode-ray oscilloscope as vertical lines, with time-base calibrated to determine distances between boundaries.
An A-scan provides one-dimensional information about the depth of tissue boundaries. The time taken for the reflected pulses to return is used to calculate the distance to the reflecting surfaces, offering insights into the size and position of internal structures along a single line.
B-scan — A series of A-scans, all taken from different angles, forming a two-dimensional image on a screen.
A B-scan builds upon the A-scan technique by compiling multiple one-dimensional scans from various angles. This allows for the creation of a comprehensive two-dimensional image, providing a cross-sectional view of internal organs and tissues.
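The time-base calculation behind an A-scan can be sketched briefly. The key step, not stated explicitly above, is halving the round-trip time because the pulse travels to the boundary and back; the speed and echo time below are assumed illustrative values.

```python
# Illustrative A-scan depth calculation: depth = (speed * time) / 2,
# since the measured time is for the pulse's round trip.
def boundary_depth_m(speed_ms, round_trip_time_s):
    """Depth of a reflecting boundary from the echo's round-trip time."""
    return speed_ms * round_trip_time_s / 2

# Assumed values: ultrasound speed in soft tissue ~1500 m/s, echo after 60 us.
depth = boundary_depth_m(1500, 60e-6)  # depth of the boundary in metres
```

Forgetting the factor of two (and so doubling the depth) is a common slip in these calculations.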
Students may overlook that the exponential attenuation formula I = I₀e⁻μx applies only to parallel beams; divergent beams experience additional intensity decrease without absorption. Remember that the attenuation formula I = I₀e⁻μx is for a parallel beam; it does not account for the intensity decrease of a divergent beam due to spreading out.
For attenuation calculations using I = I₀e⁻μx, always check that the units of the distance 'x' and the attenuation coefficient 'μ' are consistent (e.g., cm and cm⁻¹).
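The unit-consistency tip above can be illustrated with a short calculation. The attenuation coefficient of 0.60 cm^-1 for bone is an assumed example value, and x is kept in centimetres to match it.

```python
import math

# Illustrative sketch of I = I0 * exp(-mu * x) for a parallel beam.
# mu = 0.60 cm^-1 is an assumed example value, so x must be in cm.
def transmitted_intensity(i0, mu_per_cm, x_cm):
    """Exponential attenuation of a parallel beam (no divergence)."""
    return i0 * math.exp(-mu_per_cm * x_cm)

# Fraction of a parallel X-ray beam transmitted through 3.0 cm of 'bone':
fraction = transmitted_intensity(1.0, 0.60, 3.0)  # ~17% gets through
```

Passing i0 = 1.0 makes the result directly the transmitted fraction, which is often what an exam question asks for.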
X-rays are produced by the electron bombardment of a metal target. High-energy electrons are accelerated across a potential difference and then strike a metal anode, causing the emission of X-ray photons. The minimum wavelength of the X-rays produced is inversely proportional to the accelerating potential difference, given by λ₀ = hc/eV, where V is the accelerating p.d.
Minimum wavelength of X-rays
This formula, λ₀ = hc/eV, is derived from the kinetic energy of an electron being fully converted into a single X-ray photon's energy, where λ₀ is the minimum wavelength, h is the Planck constant, c is the speed of light, e is the electron charge, and V is the accelerating potential difference.
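A quick numerical check of the cut-off wavelength; the accelerating potential difference of 100 kV is an assumed example value typical of a diagnostic X-ray tube.

```python
# Hedged worked example of lambda_min = hc / (eV): the whole of one
# electron's kinetic energy eV goes into a single photon of energy hc/lambda.
H = 6.63e-34         # Planck constant / J s
C = 3.00e8           # speed of light / m s^-1
E_CHARGE = 1.60e-19  # elementary charge / C

def minimum_wavelength(voltage_v):
    """Cut-off (minimum) X-ray wavelength for accelerating p.d. voltage_v."""
    return H * C / (E_CHARGE * voltage_v)

lam_min = minimum_wavelength(100e3)  # ~1.2e-11 m for an assumed 100 kV tube
```

Doubling the tube voltage halves the cut-off wavelength, since λ₀ is inversely proportional to V.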
Bremsstrahlung radiation — Electromagnetic radiation emitted whenever a charged particle is accelerated or decelerated, also known as 'braking' (slowing down) radiation.
Bremsstrahlung radiation is a continuous spectrum of X-rays produced when electrons are decelerated by the electric fields of the target nuclei. This 'braking radiation' accounts for a significant portion of the X-ray output in an X-ray tube.
Students may confuse the continuous X-ray spectrum with characteristic peaks, not understanding that the continuous spectrum arises from varying decelerations and the cut-off wavelength from a single, maximum deceleration event. Be careful not to confuse the minimum X-ray wavelength (from a single maximum energy loss event) with the continuous spectrum of wavelengths produced by varying electron decelerations.
contrast — A measure of the difference in the degree of blackening between different regions of an X-ray image, indicating how clearly outlined bones and other organs are.
Contrast is vital for distinguishing different tissues in an X-ray image. It is achieved through the differential attenuation of X-rays by various materials, with denser structures like bone absorbing more X-rays and appearing lighter, while less dense tissues appear darker.
linear attenuation coefficient — A constant that depends on the medium and on the energy of the X-ray photons, also known as the linear absorption coefficient of the medium, used in the exponential attenuation equation I = I₀e⁻μx.
Similar to ultrasound, X-rays also undergo attenuation as they pass through matter. The linear attenuation coefficient (μ) for X-rays quantifies this reduction in intensity and depends on both the material and the energy of the X-ray photons, influencing the contrast in an X-ray image.
Students might not understand why X-ray images are 'shadows' and how contrast is achieved, especially with contrast agents like barium sulfate. To explain 'contrast' in an X-ray image, state that it arises from the differential attenuation of X-rays by tissues with different attenuation coefficients (μ), such as bone and soft tissue.
Computed tomography (CT) scanning produces a 3D image of an internal structure. This is achieved by first combining multiple X-ray images taken in the same section from different angles to obtain a 2D image of that section. This process is then repeated along an axis, and the 2D images of multiple sections are combined to construct a comprehensive 3D representation of the internal structure.
Tomography — A technique whereby a three-dimensional image or 'slice' through the body may be obtained.
Tomography is the overarching principle behind CT scanning, allowing for the creation of detailed cross-sectional images. By taking multiple projections from different angles, a computer can reconstruct a 'slice' of the body, revealing internal structures in three dimensions.
voxels — Small units into which a section or 'slice' through a cube is divided in CT scanning, each absorbing the X-ray beam to a different extent.
Voxels are the three-dimensional equivalent of pixels, representing discrete volumes within the reconstructed image. The varying X-ray absorption by each voxel contributes to the detailed 3D image, allowing for differentiation of tissues.
pixel — A number representing the intensity transmitted through each voxel alone.
In the context of CT scanning, a pixel in a 2D slice represents the average X-ray attenuation within a corresponding voxel. These pixel values are then used by the computer to construct the final 3D image.
Students might not grasp the distinction between a 2D X-ray image and a 3D CT scan, particularly how multiple X-ray images are combined computationally to build a 3D representation. Don't confuse a 2D X-ray image, which is a 'shadow' based on total attenuation, with a 3D CT scan, which is computationally constructed from many X-ray slices.
For questions on CT scanning, explain that it combines multiple X-ray images from different angles to create a 2D slice, and then combines multiple slices to form a 3D image.
PET scanning utilizes radioactive tracers that decay by β+ (positron) emission. These tracers are introduced into the body and absorbed by the tissue being studied. When a positron is emitted, it travels a short distance and then annihilates with an electron in the tissue, producing a pair of gamma-ray photons.
tracers — Chemical compounds in which one or more atoms have been replaced by radioactive nuclei of the same element, allowing the compound to be located, or its progress followed, in living tissues.
Tracers are essential for PET scanning, allowing medical professionals to observe metabolic processes or blood flow. By incorporating a radioactive isotope into a biologically active molecule, its distribution and concentration within the body can be tracked.
Annihilation — Occurs when a particle interacts with its antiparticle, converting their mass into energy.
In PET scanning, the annihilation of a positron (antiparticle of an electron) with an electron is a key event. This interaction results in the complete conversion of their mass into energy, which is released in the form of two gamma-ray photons.
Energy of electron-positron annihilation
E = Δmc², used to calculate the total energy released when particles and antiparticles annihilate, where E is the total energy, Δm is the total mass converted, and c is the speed of light.
Energy of a photon
E = hf, relating photon energy to its frequency, where E is the energy, h is the Planck constant, and f is the frequency.
The two gamma-ray photons produced during electron-positron annihilation travel in opposite directions to conserve momentum. These photons travel outside the body and are detected by the PET scanner. By processing the arrival times of these gamma-ray photons, the location of the annihilation event, and thus the concentration of the tracer in the tissue, can be precisely determined to create an image.
Students might not understand why PET scanning requires two gamma-ray photons to be emitted in opposite directions, which is crucial for momentum conservation and localization of the annihilation event. Avoid stating that only one photon is produced in PET annihilation; two gamma photons are always produced, travelling in opposite directions to conserve momentum.
When calculating the energy of a single gamma photon from electron-positron annihilation, use the rest mass of just one particle (E = mₑc²), as the total energy (2mₑc²) is shared equally between the two photons.
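A minimal numerical sketch of this calculation, using standard rounded constants:

```python
# Sketch (illustrative, not an exam method): energy and frequency of
# EACH gamma photon from electron-positron annihilation, E = m_e*c^2.
m_e = 9.11e-31   # electron rest mass / kg
c = 3.00e8       # speed of light / m s^-1
h = 6.63e-34     # Planck constant / J s

E_photon = m_e * c**2   # energy of one photon (total 2*m_e*c^2 shared equally)
f = E_photon / h        # corresponding frequency, from E = hf
print(E_photon, f)      # ≈ 8.2e-14 J (≈ 0.51 MeV), ≈ 1.2e20 Hz
```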
In descriptions of PET scanning, clearly state that the scanner detects pairs of gamma photons arriving simultaneously on opposite sides to locate the annihilation event along a line.
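The localisation step can be sketched numerically; the geometry and the 0.2 ns timing value below are hypothetical illustrations, not syllabus data:

```python
# Hypothetical sketch: the annihilation event lies on the line joining
# two opposite detectors, offset from the midpoint according to the
# arrival-time difference of the two gamma photons.
c = 3.0e8  # speed of light / m s^-1

def annihilation_offset(delta_t):
    # The photon reaching the nearer detector arrives earlier by delta_t,
    # so the event lies c * delta_t / 2 from the midpoint, towards it.
    return c * delta_t / 2

print(annihilation_offset(0.2e-9))  # a 0.2 ns difference -> 0.03 m offset
```

Processing many such photon pairs builds up a map of tracer concentration.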
To excel in medical physics questions, ensure you can clearly explain the production, detection, and diagnostic applications of ultrasound, X-rays, and PET scanning, including relevant formulas and their conditions of use.
Definitions Bank
transducer
Any device that converts energy from one form to another.
specific acoustic impedance
The product of the density ρ of the medium and the speed c of the wave in the medium.
intensity reflection coefficient
The ratio of the reflected intensity I_R to the incident intensity I₀ for a wave incident normally on a boundary between two media.
linear absorption (or attenuation) coefficient
A constant for the medium that depends not only on the medium itself but also on the frequency of the ultrasound, used in the exponential attenuation equation I = I₀e^(−μx).
A-scan
A technique where a short pulse of ultrasound is transmitted into the body, and reflected pulses are detected and displayed on a cathode-ray oscilloscope as vertical lines, with the time-base calibrated so that distances between boundaries can be determined.
Command Word Guide
| Explain | For 'Explain' questions, provide clear cause-and-effect reasoning. For example, when explaining contrast in X-rays, link it to differential attenuation by tissues with different linear attenuation coefficients. |
| Describe | For 'Describe' questions, outline the process or phenomenon in a logical sequence. For instance, describe the production of X-rays by detailing electron acceleration and bombardment of a metal target. |
| Calculate | For 'Calculate' questions, show all working steps, including formula substitution and units. Ensure consistency in units, especially for attenuation calculations. |
| Compare | For 'Compare' questions, highlight both similarities and differences between two concepts. For example, compare X-ray imaging and CT scanning by noting the dimensionality of the image and the method of construction. |
Common Mistakes
Confusing the continuous X-ray spectrum with characteristic peaks.
The continuous spectrum arises from the varying decelerations of electrons (Bremsstrahlung radiation), while characteristic peaks are due to electron transitions within target atoms. The cut-off wavelength corresponds to an electron losing all of its kinetic energy in a single event, producing the most energetic (shortest-wavelength) photon.
Not understanding the role of coupling gel in ultrasound.
A coupling gel is essential to reduce the large acoustic impedance mismatch between air and skin, which would otherwise cause almost total reflection of ultrasound at the body surface, preventing effective transmission.
Applying the exponential attenuation formula I = I₀e^(−μx) to divergent beams without considering additional intensity decrease.
This formula is strictly for parallel beams. For divergent beams, intensity also decreases due to the spreading out of the wave, in addition to absorption.
This chapter covers fundamental concepts in astronomy and cosmology, including how to determine cosmic distances using standard candles, estimate stellar properties like temperature and radius, and understand the evidence for an expanding Universe through redshift and Hubble's law, leading to the Big Bang theory.
Luminosity — The total power (the total energy emitted per unit time) of the object.
Luminosity represents the total energy radiated by a star per unit time, essentially its intrinsic brightness. It is a fundamental property of a star, independent of its distance from an observer.
Radiant flux intensity — The radiant power passing normally through a surface, per unit area.
Radiant flux intensity, also known as apparent brightness, is the power received per unit area at a specific distance from a source. It decreases with distance from the source, following an inverse square law.
Students often confuse luminosity (total power emitted) with radiant flux intensity (power per unit area observed). Remember that luminosity is an intrinsic property of the star, while radiant flux intensity depends on both the star's luminosity and the observer's distance from it.
Radiant flux intensity
F = L/(4πd²), for a source emitting uniformly in all directions, assuming no energy absorption between the source and the observer; 'd' is the distance from the source.
Standard candle — A class of stellar object which has a known luminosity and whose distance can be determined by calculation using its radiant flux intensity (apparent brightness) and luminosity.
A standard candle is an astronomical object with a reliably known luminosity. By measuring its radiant flux intensity (apparent brightness) from Earth and knowing its intrinsic luminosity, its distance can be calculated using the inverse square law for radiant flux intensity.
Cepheid variable star — A star whose radius varies periodically. The varying radius of the star causes the temperature of the star to change and consequently, the luminosity varies periodically.
Cepheid variable stars are a crucial type of standard candle. Their periodic changes in radius lead to predictable variations in temperature and, consequently, their luminosity. This allows astronomers to determine their absolute luminosity from their pulsation period, making them excellent tools for measuring cosmic distances.
When using a standard candle to find distance, show your method clearly: 1) State the known luminosity L. 2) Use the measured F. 3) Rearrange F = L/(4πd²) to find d.
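The three steps above can be sketched numerically; the solar luminosity and flux values below are rounded, illustrative figures:

```python
# Sketch of the standard-candle method, using the Sun as an
# illustrative source (values rounded).
import math

def distance_from_flux(L, F):
    # Step 3: rearranged inverse square law, d = sqrt(L / (4*pi*F))
    return math.sqrt(L / (4 * math.pi * F))

L_known = 3.8e26     # step 1: known luminosity / W
F_measured = 1.36e3  # step 2: measured radiant flux intensity / W m^-2
print(distance_from_flux(L_known, F_measured))  # ≈ 1.5e11 m (about 1 AU)
```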
Light-year (ly) — The distance that a photon of light travels through space in one year i.e. 9.46 × 10¹⁵ m.
A light-year is a unit of distance, not time, used in astronomy to measure vast cosmic scales. It represents the distance light travels in a vacuum over the course of one Earth year.
Black body — A theoretical idea developed to explain the intensity of radiation of different wavelengths, emitting 'black-body radiation'.
A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. It then emits thermal radiation in a continuous spectrum, known as black-body radiation, whose characteristics depend only on its temperature.
λmax — The wavelength corresponding to the maximum intensity of emission at any temperature.
For a black body or a star, λmax is the specific wavelength at which the emitted radiation has its highest intensity. This peak wavelength is inversely related to the object's temperature.
Wien’s displacement law — A simple relationship between λmax and the thermodynamic temperature T, given by λmax = b/T.
Wien's displacement law states that the peak wavelength of emitted radiation (λmax) by a black body is inversely proportional to its absolute temperature (T). This law allows astronomers to estimate the surface temperature of a star by observing the wavelength at which it emits most intensely.
Wien’s displacement law
λmax = b/T, used to determine the surface temperature of a star from its peak emission wavelength; 'b' is Wien’s displacement constant.
Stefan–Boltzmann law — The relationship that states the luminosity L of a star is proportional to T⁴, given by L = 4πσr²T⁴ for a spherical object.
The Stefan-Boltzmann law describes the total power radiated per unit surface area of a black body, which is directly proportional to the fourth power of its absolute temperature. For a spherical star, this translates to its total luminosity being proportional to its surface area (4πr²) and the fourth power of its surface temperature.
Stefan–Boltzmann law
L = 4πσr²T⁴, for a spherical object emitting black-body radiation; 'σ' is the Stefan–Boltzmann constant, 'r' is the radius, and 'T' is the thermodynamic temperature.
To 'estimate the radius of a star', first use Wien's law to find temperature (T) from λmax, then use this T in the Stefan-Boltzmann law, rearranged for radius (r).
Not converting units consistently (e.g., km to m, nm to m, years to seconds) when using formulas like Wien's law, Stefan-Boltzmann law, or Hubble's law is a common error. Always convert to SI units before calculation.
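A minimal sketch of the two-step method, with Sun-like illustrative values and the nm-to-m conversion made explicit:

```python
# Sketch of the two-step radius estimate (illustrative values).
import math

b = 2.9e-3       # Wien's displacement constant / m K
sigma = 5.67e-8  # Stefan-Boltzmann constant / W m^-2 K^-4

def star_radius(lambda_max, L):
    T = b / lambda_max  # step 1: Wien's law gives the surface temperature
    # step 2: rearrange L = 4*pi*sigma*r^2*T^4 for r
    return math.sqrt(L / (4 * math.pi * sigma * T**4))

# lambda_max = 500 nm, converted to metres, and L ≈ 3.8e26 W:
print(star_radius(500e-9, 3.8e26))  # ≈ 6.9e8 m, close to the solar radius
```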
Galaxy — A group of hundreds of millions of stars, stellar remnants, gas and dark matter, held together by gravity.
A galaxy is a massive, gravitationally bound system consisting of stars, stellar remnants, interstellar gas and dust, and dark matter. Galaxies vary widely in size and shape, from dwarf galaxies with tens of millions of stars to giant galaxies with hundreds of trillions of stars.
Redshift — When the spectral lines in the absorption spectrum are seen to have an increase in wavelength from their known values measured in a laboratory.
Redshift occurs when the electromagnetic radiation from a distant object, such as a galaxy, is observed to have an increased wavelength compared to its emitted wavelength. This shift towards the red end of the spectrum indicates that the object is moving away from the observer.
Blueshift — When the spectral lines in the absorption spectrum are seen to have a decrease in wavelength.
Blueshift is the opposite of redshift; it occurs when the electromagnetic radiation from an object is observed to have a decreased wavelength. This shift towards the blue end of the spectrum indicates that the object is moving towards the observer.
Misinterpreting the direction of motion from redshift (moving away) versus blueshift (moving towards) is a common mistake. Remember, an increase in wavelength (redshift) means moving away, while a decrease (blueshift) means moving towards.
Doppler redshift/blueshift (approximate)
Δλ/λ ≈ v/c, approximate for objects not moving close to the speed of light. A positive Δλ indicates redshift (moving away), while a negative Δλ indicates blueshift (moving towards).
Doppler redshift/blueshift (alternative frequency form)
Δf/f ≈ v/c, equivalent to the wavelength form at these approximate speeds. A positive Δf indicates blueshift, and a negative Δf indicates redshift.
The observation that light from almost all distant galaxies exhibits redshift is crucial evidence for an expanding Universe. This redshift implies that these galaxies are moving away from us, and the further away they are, the greater their recessional speed. This systematic outward motion suggests that the space between galaxies is stretching.
When explaining how redshift provides evidence for an expanding Universe, state that light from almost all distant galaxies is redshifted, implying they are all moving away from us.
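The underlying Doppler estimate can be sketched as follows; the wavelength values are illustrative:

```python
# Sketch: recessional speed from the approximate Doppler relation
# delta_lambda / lambda ≈ v / c (valid only for v much less than c).
c = 3.0e8  # speed of light / m s^-1

def recessional_speed(lam_observed, lam_emitted):
    delta = lam_observed - lam_emitted  # positive => redshift (moving away)
    return c * delta / lam_emitted

# A line emitted at 656 nm observed at 676 nm (illustrative values):
print(recessional_speed(676e-9, 656e-9))  # ≈ 9.1e6 m s^-1, receding
```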
Hubble’s law — The conclusion that the radial speed of galaxies is proportional to their distance, v ≈ H₀ × d.
Hubble's law states that the radial speed (v) at which galaxies are receding from Earth is directly proportional to their distance (d) from us. This linear relationship, v ≈ H₀d, is a cornerstone of modern cosmology, providing a direct link between a galaxy's distance and its observed redshift.
Hubble constant — The constant H₀, determined from the gradient of the graph of radial speed against distance in Hubble's law.
The Hubble constant (H₀) is the proportionality constant in Hubble's law. It represents the rate of expansion of the Universe and can be determined from the gradient of a graph plotting the radial speed of galaxies against their distance. Its value is crucial for estimating the age of the Universe.
Hubble’s law
v ≈ H₀d, an approximate relationship for galaxies, indicating that their recessional speed is proportional to their distance. It is fundamental to the Big Bang theory.
Quasar — A very distant object with a very large redshift and a huge luminosity of up to 10⁴⁰ W.
Quasars are extremely luminous and distant active galactic nuclei, characterized by very large redshifts. Their immense luminosity, up to 10⁴⁰ W, makes them observable across vast cosmic distances, providing insights into the early Universe.
Cosmological Principle — The idea that, on a large enough scale, the Universe is both homogeneous and isotropic, meaning it would have the same general appearance from anywhere else as it appears from Earth.
The Cosmological Principle posits that the Universe is uniform and looks the same in all directions and at all locations when viewed on sufficiently large scales. This principle underpins many cosmological models, including the Big Bang theory, by simplifying the complex distribution of matter in the Universe.
Big Bang — The time when all galaxies were at the same point, marking the beginning of the Universe's expansion.
The Big Bang theory describes the Universe's origin as a singular event where all matter and energy were concentrated at a single point. This event marked the beginning of the Universe's expansion, which continues to this day, as evidenced by redshift and Hubble's law.
Hubble's law, which shows that galaxies are receding from us at speeds proportional to their distance, provides strong evidence for the Big Bang theory. If galaxies are currently moving apart, then extrapolating their motion backward in time suggests that they must have originated from a single, extremely dense point. This singular point in time and space is known as the Big Bang, marking the beginning of the Universe's expansion.
To explain how Hubble's law leads to the Big Bang theory, argue that if galaxies are moving apart now, they must have been closer in the past, and extrapolating back in time leads to a single point of origin.
Age of the Universe
T ≈ 1/H₀, derived from Hubble's law assuming a constant expansion speed. It provides an estimate for the age of the Universe.
When calculating the age of the Universe from T = 1/H₀, ensure the Hubble constant (H₀) is in s⁻¹, not km s⁻¹ Mpc⁻¹. Consistent unit conversion is critical for accurate results.
For questions involving Hubble's law, a graph of recessional speed (v) against distance (d) will be a straight line through the origin with a gradient equal to the Hubble constant, H₀.
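The unit conversion and age estimate can be sketched as follows, assuming an illustrative H₀ of 70 km s⁻¹ Mpc⁻¹:

```python
# Sketch: age of the Universe, T = 1/H0, with the conversion from
# km s^-1 Mpc^-1 to s^-1 made explicit (H0 value is illustrative).
H0 = 70.0                 # km s^-1 Mpc^-1
Mpc = 3.09e22             # metres in one megaparsec
H0_SI = H0 * 1.0e3 / Mpc  # km -> m, then per Mpc -> per m, giving s^-1

age_s = 1.0 / H0_SI                 # seconds
age_yr = age_s / (365 * 24 * 3600)  # years
print(H0_SI, age_yr)  # ≈ 2.3e-18 s^-1, ≈ 1.4e10 years
```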
Definitions Bank
Luminosity
The total power (the total energy emitted per unit time) of the object.
Radiant flux intensity
The radiant power passing normally through a surface, per unit area.
Standard candle
A class of stellar object which has a known luminosity and whose distance can be determined by calculation using its radiant flux intensity (apparent brightness) and luminosity.
Cepheid variable star
A star whose radius varies periodically. The varying radius of the star causes the temperature of the star to change and consequently, the luminosity varies periodically.
Light-year (ly)
The distance that a photon of light travels through space in one year i.e. 9.46 × 10¹⁵ m.
Command Word Guide
| Understand | For 'Understand the term luminosity', define luminosity as the total power of radiation emitted by a star. For 'Understand the use of standard candles', explain how known luminosity and measured radiant flux intensity allow distance calculation. For 'Understand that lines from emission spectra show an increase in wavelength', explain redshift as an increase in wavelength from known values. |
| Recall and use | For 'Recall and use the inverse square law for radiant flux intensity', state F = L/(4πd²) and apply it to calculations, ensuring correct unit conversions. For 'Recall and use Wien’s displacement law', state λmax = b/T and use it to estimate temperature. For 'Recall and use Hubble’s law', state v ≈ H₀d and use it to relate galaxy speed and distance. |
| Use | For 'Use the Stefan–Boltzmann law', apply L = 4πσr²T⁴ in calculations. For 'Use Wien’s displacement law and the Stefan–Boltzmann law to estimate the radius of a star', demonstrate a two-step calculation: first find T using Wien's law, then find r using Stefan-Boltzmann law. For 'Use Δλ/λ ≈ Δf/f ≈ v/c', apply the Doppler formula to calculate relative speeds from wavelength or frequency shifts. |
| Explain | For 'Explain why redshift leads to the idea that the Universe is expanding', connect observed redshift of distant galaxies to their recession from us, implying an overall expansion of space. For 'Explain how Hubble’s law leads to the Big Bang theory', argue that if galaxies are moving apart now, they must have been closer in the past, converging to a single point of origin. |
Common Mistakes
Confusing luminosity (total power emitted) with radiant flux intensity (power per unit area observed).
Luminosity is the intrinsic total power output of a star, while radiant flux intensity is the power received per unit area at a specific distance from the star. Radiant flux intensity decreases with distance, but luminosity does not.
Incorrectly applying the inverse square law for radiant flux intensity, especially with units or distance conversions.
Always ensure all units are in SI (e.g., distance in metres) before applying F = L/(4πd²), and remember to square the distance 'd'.
Misinterpreting the direction of motion from redshift (moving away) versus blueshift (moving towards).
Redshift (increase in wavelength) indicates an object is moving away from the observer. Blueshift (decrease in wavelength) indicates an object is moving towards the observer.
This chapter covers essential practical skills for Cambridge International AS & A Level Physics, focusing on manipulation, measurement, data presentation, analysis, evaluation, and planning. It provides guidance on recording data, plotting graphs, determining relationships, and estimating uncertainties, with an emphasis on safety and identifying experimental limitations.
absolute uncertainty — The size of the range of values within which the ‘true value’ of a measurement is likely to lie.
Absolute uncertainty quantifies the precision of a measurement, indicating the maximum possible deviation from the measured value. It represents the interval within which the actual value is expected to fall.
fractional uncertainty — The ratio of the absolute uncertainty in a measurement to the measured value.
Fractional uncertainty expresses the uncertainty as a proportion of the measured quantity. It is a dimensionless quantity, useful for comparing the relative precision of different measurements.
percentage uncertainty — The ratio of the absolute uncertainty in a measurement to the measured value, expressed as a percentage.
Percentage uncertainty is the fractional uncertainty multiplied by 100%. It provides a clear, easily comparable measure of the relative precision of a measurement, indicating how significant the uncertainty is compared to the measured value.
Period of oscillation
T = t/n, used to determine the period of an oscillating system by timing multiple oscillations, where 't' is the total time and 'n' is the number of oscillations.
Gradient of a straight line
gradient = (y₂ − y₁)/(x₂ − x₁), calculated from two widely separated points (x₁, y₁) and (x₂, y₂) on a trend line or tangent drawn on a graph.
Y-intercept from line equation
c = y − mx, used to determine the y-intercept when the x-axis does not start at the origin, by substituting a point (x, y) from the line and the calculated gradient 'm'.
Percentage difference (average value)
percentage difference = |A − B| ÷ ½(A + B) × 100%, used to compare two values A and B of a constant to determine if a relationship is supported by experimental data, by expressing their difference relative to their average.
Percentage difference (one of the values)
percentage difference = |A − B| ÷ A × 100%, used to compare two values A and B of a constant to determine if a relationship is supported by experimental data, by expressing their difference relative to one of the values.
Absolute uncertainty from percentage uncertainty
absolute uncertainty = (percentage uncertainty ÷ 100%) × measured value, used to convert a percentage uncertainty into an absolute uncertainty for a given measured quantity.
Absolute uncertainty from fractional uncertainty
absolute uncertainty = fractional uncertainty × measured value, used to convert a fractional uncertainty into an absolute uncertainty for a given measured quantity.
Absolute uncertainty in repeated readings
absolute uncertainty = ½(largest reading − smallest reading), used to estimate the absolute uncertainty in a set of repeated measurements by taking half the range of the readings.
Percentage uncertainty in a power
percentage uncertainty in Xⁿ = nP, used when a quantity X with percentage uncertainty P is raised to a power 'n'.
Fractional uncertainty in a product/quotient with powers
Δx/x = y(ΔA/A) + z(ΔB/B) for x = AʸBᶻ, used to combine fractional uncertainties for quantities (A, B) that are multiplied or divided and raised to powers (y, z) to find the fractional uncertainty in the resultant quantity (x).
Percentage uncertainty in a product/quotient with powers
percentage uncertainty in x = y(percentage uncertainty in A) + z(percentage uncertainty in B) for x = AʸBᶻ, used to combine percentage uncertainties for quantities that are multiplied or divided and raised to powers to find the percentage uncertainty in the resultant quantity (x).
Absolute uncertainty in gradient (worst acceptable line)
Δgradient = |gradient of best-fit line − gradient of worst acceptable line|, used to estimate the absolute uncertainty in the gradient from a graph by comparing the best-fit line to a single worst acceptable line.
Absolute uncertainty in gradient (steepest/shallowest)
Δgradient = ½(steepest gradient − shallowest gradient), used to estimate the absolute uncertainty in the gradient from a graph by considering the range between the steepest and shallowest worst acceptable lines.
Absolute uncertainty in y-intercept (worst acceptable line)
Δy-intercept = |y-intercept of best-fit line − y-intercept of worst acceptable line|, used to estimate the absolute uncertainty in the y-intercept from a graph by comparing the best-fit line to a single worst acceptable line.
Absolute uncertainty in y-intercept (steepest/shallowest)
Δy-intercept = ½|difference between the y-intercepts of the steepest and shallowest worst acceptable lines|, used to estimate the absolute uncertainty in the y-intercept from a graph by considering the range between the steepest and shallowest worst acceptable lines.
Accurate data recording is fundamental to practical work. Numerical data should be organised in tables with correct column headings, including quantity and unit (e.g., 'Time / s'). Raw measurements must be recorded to consistent precision, reflecting the measuring instrument's capability. Calculated values should maintain an appropriate number of significant figures, typically matching the least precise raw data or one more.
Students often record raw measurements to inconsistent degrees of precision or an incorrect number of decimal places. Remember that all raw measurements in a table column must be recorded to the same number of decimal places, reflecting the instrument's precision.
Ensure all table column headings are formatted correctly with the quantity name or symbol followed by a forward slash and the unit (e.g., 'Time / s').
Graphs are crucial for visualising relationships between variables. Axes must be labelled with quantities and units, and scales chosen so that plotted points occupy at least half the graph grid. A single, thin trend line should be drawn to show the general pattern, with an even distribution of points on either side, rather than connecting individual points. The gradient and y-intercept of this trend line can then be calculated to determine constants in a linear relationship.
Students often use inappropriate scales on graph axes, leading to plotted points not occupying at least half the graph grid. Always choose simple scales that maximise the use of the graph paper.
Students frequently draw trend lines that do not show an even distribution of points or force the line through all points. Remember to draw a single, thin line of best fit that has an even distribution of points on either side, not a 'dot-to-dot' line.
Students often incorrectly calculate the gradient or y-intercept, especially when the x-axis does not start at the origin, or by not using points from the trend line itself. Always use two points that are far apart on your line of best fit, not the plotted data points, to calculate the gradient.
Never use your plotted data points to calculate the gradient; always use two points that are far apart on your line of best fit.
Remember that the y-intercept cannot be read directly from the axis if the x-axis does not start at zero; calculate it using c = y − mx.
All measurements have inherent uncertainties. Absolute uncertainty defines the range of possible true values. Fractional and percentage uncertainties express this range relative to the measured value. When combining measurements, uncertainties propagate. For quantities multiplied or divided, percentage (or fractional) uncertainties are added. For quantities raised to a power, the percentage uncertainty is multiplied by that power. Graphical uncertainties in gradient and y-intercept are determined by drawing 'worst acceptable lines' that pass through the error bars of all points.
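The propagation rule for products, quotients and powers can be sketched as follows (the example quantities and uncertainties are illustrative):

```python
# Sketch: combining percentage uncertainties for x = A^y * B^z,
# where percentage uncertainty in x = y*(% in A) + z*(% in B).
def percent_uncertainty(value, abs_unc):
    return 100 * abs_unc / value

def combined_percent(pA, pB, y, z):
    return y * pA + z * pB

# Illustrative example: x = A^2 * B, with A = 2.0 ± 0.1 and B = 5.0 ± 0.1
pA = percent_uncertainty(2.0, 0.1)  # about 5 %
pB = percent_uncertainty(5.0, 0.1)  # about 2 %
print(combined_percent(pA, pB, 2, 1))  # ≈ 12 % (2×5% + 1×2%)
```

The same rule with the 100% factors dropped gives the fractional-uncertainty form.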
Students often fail to consider the uncertainty in measurements or identify limitations and suggest appropriate improvements for experiments. Always account for uncertainties and critically evaluate experimental procedures.
Non-linear relationships can often be transformed into linear forms using logarithms, making it easier to determine constants from graphs. For a power law relationship of the form y = axⁿ, plotting lg y against lg x yields a straight line with gradient 'n' and y-intercept lg a. For an exponential relationship of the form y = ae^(−bx), plotting ln y against x gives a straight line with gradient '−b' and y-intercept ln a. When calculating logarithms, the number of decimal places in the log value should correspond to the number of significant figures in the original value.
When calculating logarithms, the number of decimal places in the log value should equal the number of significant figures in the original value.
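The power-law linearisation can be sketched with synthetic data (a = 3 and n = 2 are assumed values chosen for illustration):

```python
# Sketch: linearising y = a*x^n by plotting lg y against lg x;
# the gradient gives n and the y-intercept gives lg a.
import math

xs = [1.0, 2.0, 4.0, 8.0]
ys = [3.0 * x**2 for x in xs]  # synthetic 'experimental' data, y = 3x^2

lgx = [math.log10(x) for x in xs]
lgy = [math.log10(y) for y in ys]

# Gradient from two widely separated points on the straight line:
gradient = (lgy[-1] - lgy[0]) / (lgx[-1] - lgx[0])
intercept = lgy[0] - gradient * lgx[0]  # c = y - m*x
print(gradient, 10**intercept)  # n ≈ 2.0, a ≈ 3.0
```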
Effective experimental planning involves defining the problem, outlining a clear method, detailing data analysis, and considering safety. Evaluation requires identifying limitations in the experimental procedure and suggesting specific, practical improvements. This includes addressing sources of error, such as reaction time or difficulty in precise measurement, and proposing methods to mitigate them, like using light gates or set squares.
Students often do not follow general laboratory rules and specific safety precautions for practical activities. Always prioritise safety and adhere to all guidelines.
For 'planning' questions, clearly define the independent and dependent variables and explicitly state which other variables must be kept constant.
When asked to 'identify limitations and suggest improvements', be specific. Link a source of error (e.g., reaction time) to a concrete improvement (e.g., use light gates and a timer).
Always include at least one specific and relevant safety precaution in your experimental plan, explaining the risk it mitigates.
To determine the uncertainty in a gradient, draw a 'worst acceptable line' that passes through the error bars of all points and calculate the difference between its gradient and the best-fit gradient. Alternatively, calculate the gradients of the steepest and shallowest worst acceptable lines and take half their difference.
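The steepest/shallowest-line method can be sketched with illustrative gradient values read off a graph:

```python
# Sketch: absolute uncertainty in a gradient as half the difference
# between the steepest and shallowest worst acceptable lines.
def gradient_uncertainty(m_steepest, m_shallowest):
    return (m_steepest - m_shallowest) / 2

# Illustrative gradients:
m_best, m_steep, m_shallow = 4.5, 5.0, 4.0
dm = gradient_uncertainty(m_steep, m_shallow)
print(f"gradient = {m_best} +/- {dm}")  # gradient = 4.5 +/- 0.5
```

The same half-range idea applies to the y-intercept.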
Definitions Bank
absolute uncertainty
The size of the range of values within which the ‘true value’ of a measurement is likely to lie.
fractional uncertainty
The ratio of the absolute uncertainty in a measurement to the measured value.
percentage uncertainty
The ratio of the absolute uncertainty in a measurement to the measured value, expressed as a percentage.
Command Word Guide
| Identify | State a specific limitation or improvement without elaboration. For example, 'difficulty in keeping the rule vertical'. |
| Suggest | Provide a specific and practical improvement directly addressing an identified limitation. For example, 'use a clamp stand to hold the rule vertically'. |
| Determine | Calculate a value (e.g., gradient, y-intercept, constant) showing clear working and appropriate significant figures/decimal places, often including units and uncertainties. |
| Evaluate | Assess the quality of data or experimental procedure, including identifying limitations and suggesting improvements, and drawing conclusions based on evidence and uncertainties. |
Common Mistakes
Not following general laboratory rules and specific safety precautions.
Always adhere to all safety guidelines and specific instructions for each practical activity.
Recording raw measurements to inconsistent degrees of precision or incorrect number of decimal places.
Ensure all raw measurements in a table column are recorded to the same number of decimal places, reflecting the precision of the measuring instrument.
Failing to use the correct number of significant figures for calculated values.
Calculated values should be given to the same number of significant figures as, or one more than, the least precise raw data used in the calculation.
Generated by Nexelia Academy · nexeliaacademy.com