• It’s time we moved beyond the blanket blame game that often paints boys and men as inherently problematic. The phrase “toxic masculinity” has become a catch-all for behaviors and attitudes that are, in many cases, symptoms of deeper social failures — not innate traits of maleness. Rather than pathologizing boys for being boys, we need to understand and address the systems that shape them.

    Boys are not evil

    Take the UK documentary series Adolescence. A quick glance may lead viewers to assume it’s about a “typical” white teenage boy in trouble. But a deeper look reveals the story of a young man who is anything but typical — he is the product of a broken, racially biased immigration system that failed him long before society judged him. Framing him simply through the lens of “toxic masculinity” erases that context and oversimplifies a complex, human story.

    If we’re serious about improving the lives of boys and girls alike, we must stop demonizing masculinity altogether and start promoting strong, compassionate male role models. As experts point out, fathers and mentors play a powerful role in shaping boys into healthy, empathetic men. “Boys and young men cannot be what they cannot see,” one researcher notes — and without nurturing, non-violent male figures in media, culture, and homes, boys are left adrift.

    Blaming social media or banning teens from platforms like TikTok and YouTube won’t solve the problem either. What’s needed is education — particularly media literacy that helps boys critically evaluate the content they consume, including the disturbing sexism often found in online porn.

    Let’s shift the conversation. Ditch the harmful labels. Understand the context. And start building up boys instead of breaking them down.


  • While much attention has been given to the mental health struggles of teenage girls in the UK — often rightly so — there is growing concern that boys’ mental health needs are being neglected in schools due to a focus on targeted support for girls.

    Pseudo-reality television programmes like Adolescence do nothing for the mental health of teenage boys. In fact, they may actively make things worse. These shows often promote shallow, stereotypical versions of masculinity — valuing aggression, emotional suppression, and physical appearance over vulnerability, kindness, or emotional intelligence. For boys already struggling to find their identity in a world of social media pressure and unrealistic expectations, such programming reinforces damaging ideas about what it means to “be a man.” Instead of encouraging healthy emotional expression or real connection, shows like Adolescence create a false narrative where popularity, dominance, and appearance are the keys to success. This not only isolates boys who don’t fit these narrow roles but also discourages them from seeking help when they are struggling internally.


    Recent NHS data and research show a dramatic rise in hospital admissions for girls suffering from self-harm and eating disorders. For example, eating disorders are four times more common in girls aged 11 to 16 than boys, and the number of girls hospitalised after self-harming has sharply increased.

However, experts warn this doesn’t mean boys are doing well. Dr Elaine Lockhart of the Royal College of Psychiatrists explains that boys often express mental distress differently — through behavioural problems rather than emotional symptoms — but these are “two sides of the same coin.”

    This suggests that support systems in schools and healthcare may be unintentionally skewed. As resources are directed toward emotional disorders, more commonly presented by girls, boys whose distress shows up as anger, defiance, or withdrawal risk being labelled as troublemakers rather than being offered help.

    Joeli Brearley, host of To Be A Boy, argues that outdated systems in education and society are failing both girls and boys: “Something is going badly wrong.” The pressures of social media, rising inequality, and the collapse of traditional support systems have left many young people — boys included — feeling lost and unsupported.


  • As a teacher, I’ve seen my share of tense parent-school exchanges. But the recent case in Hertfordshire, England, where two parents were arrested after expressing concerns about their daughter’s school in a private WhatsApp group, is beyond belief.

Six police officers sent to arrest two parents for complaining about their daughter’s school in a private WhatsApp group chat.

    Let’s pause for a moment: this didn’t happen in Myanmar or Russia. This happened in 2025 Britain. Six police officers arrived at the home of Maxie Allen and Rosalind Levine. The couple were detained for 11 hours on suspicion of harassment and malicious communications—all because they criticised their daughter’s school leadership and shared their disbelief about being banned from the premises.

    Were they issuing threats? No. Inciting violence? Not even close. They were frustrated parents asking questions and sharing opinions in a private forum.

    We are educators, not enforcers. Of course schools deserve respect—but so do parents. Especially when they are advocating for their disabled child. When routine communication is criminalised, and “disharmony” becomes a police matter, we must ask: what are we becoming?

    This is not how trust is built. If schools want engaged, supportive communities, we need to stop treating dissent like a crime.



  • Reposting from the British Psychological Society…

    When we are interested in cause and effect relationships (which is much of the time!) we have two options: We can simply observe the world to identify associations between X and Y, or we can randomise people to different levels of X and then measure Y.

    The former – observational methods – generally provides us with only a weak basis for inferring causality at best. This approach has given us the oft-repeated (but slightly fallacious) line that ‘correlation does not imply causation’ (I would say that it can imply it, just often not much more). Of course, sometimes this is the best that we can do – if we want to understand the effects of years spent in education on mental health outcomes (for example), it would be unethical and impractical to conduct an experiment where we randomise children to stay in school for 1 or 2 more years (which option is unethical may depend on whether you’re the child or the parent…).

    But when we can randomise, that gives us remarkable inferential power. The lack of causal pathways between how we allocate participants to conditions (our randomisation procedure – hopefully, something more robust than tossing a coin!) and other factors is critical. If our randomisation mechanism influences our exposure (which by definition it should) and nothing else (ditto), and we see a difference in our outcome, then this difference must have been caused by the exposure we manipulated. But a lot remains poorly understood about exactly how and why randomisation has this magic property of allowing us to infer cause and effect. And this leads to misconceptions about what we should report in randomised studies.
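The contrast between observation and randomisation can be made concrete with a small simulation. This is my own illustrative sketch, not from the article, and the numbers (a confounder with a coefficient of 2, an exposure with zero true effect) are entirely hypothetical:

```python
import random
import statistics

# Toy illustration: a confounder drives both exposure and outcome, so the
# observational association is biased, while randomising the exposure cuts
# the causal path from confounder to exposure and recovers the true
# (here: zero) effect.
random.seed(3)

n = 20000
confounder = [random.gauss(0, 1) for _ in range(n)]

# Observational world: people with higher confounder values are more likely
# to be exposed, and the confounder also raises the outcome. The exposure
# itself has NO effect on the outcome.
exposed_obs = [1 if c + random.gauss(0, 1) > 0 else 0 for c in confounder]
outcome_obs = [2.0 * c + random.gauss(0, 1) for c in confounder]

# Randomised world: exposure is assigned by coin flip, so nothing about the
# participant can influence which arm they end up in.
exposed_rct = [random.randint(0, 1) for _ in range(n)]
outcome_rct = [2.0 * c + random.gauss(0, 1) for c in confounder]

def diff_in_means(outcome, exposed):
    m1 = statistics.mean(o for o, e in zip(outcome, exposed) if e == 1)
    m0 = statistics.mean(o for o, e in zip(outcome, exposed) if e == 0)
    return m1 - m0

print(f"observational 'effect': {diff_in_means(outcome_obs, exposed_obs):.2f}")
print(f"randomised effect:      {diff_in_means(outcome_rct, exposed_rct):.2f}")
# The observational contrast is large even though the exposure does nothing;
# the randomised contrast is close to zero.
```

The observational difference in means is substantial despite the exposure having no effect at all; under randomisation the same analysis lands near zero.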

    I want to dispel a couple of common but persistent myths.

    The first myth is that randomisation works because it balances confounders. Confounders exist in observational studies because the associations we observe between an exposure and an outcome are also influenced by myriad other variables – age, sex, social position and so on – via a complicated web of causal chains. In principle, if we measure all of these perfectly and statistically adjust for them then we are left with the causal effect of the exposure on the outcome. But in practice, we are never able to do this.

    When we randomise people, these influences will still be operating on the outcome, which will vary across the people randomised to our conditions. Does randomisation mean that all these different effects are balanced somehow?

    No – not least because confounders do not exist in experimental studies! This is for the simple reason that a confounder is something that affects both the exposure and the outcome, and in an experimental (i.e., randomised) study we test for a difference in our outcome between the two randomised groups. We know that randomisation influences the exposure, but we don’t directly compare levels of exposure and the outcome – we compare the randomised arms. And variables such as age, sex and social position can’t influence the randomisation mechanism (there is no causal pathway between, for example, participant age and our random number generator!).

So, to be accurate, we need to be talking about covariates in experimental studies – factors that influence or strongly predict the outcome – not confounders. Does randomisation balance these? Well, yes, but in a more technical and subtle sense than is generally appreciated. We know (mathematically) that the chance of a difference between our randomised groups in terms of covariates and the distribution of future outcomes becomes smaller as our sample size becomes larger (all other things being equal, larger experiments will provide narrower confidence intervals and more precise estimates – as well as smaller p-values, if that’s your thing!).

    In other words, a smaller study has a higher chance of imbalance, and this will be reflected in a wider confidence interval (and correspondingly larger p-value).

    This means that it doesn’t matter if our groups are in fact balanced, because we’ve been able to turn complexity into error. If our sample is small our standard error will be large, reflecting the greater likelihood of imbalance, and our statistical test will take that into account when generating a confidence interval and p-value. That is exactly why larger studies are more precise – they are more likely to be balanced. Darren Dahly, a statistician at University College Cork, gives a more complete treatment of the issue here. In his words: ‘randomisation allows us to make probabilistic statements about the likely similarity of the two randomised groups with respect to the outcome’.
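The size–imbalance relationship is easy to demonstrate. The following is an illustrative sketch of my own (the covariate “age”, its mean of 40 and SD of 10 are invented numbers), not an analysis from the article:

```python
import random
import statistics

# Randomise people with a baseline covariate (here, a made-up "age") into
# two arms, many times over, and measure how far apart the arm means
# typically land by chance alone.
random.seed(42)

def mean_imbalance(n, sims=2000):
    """Average absolute difference in mean age between two randomised arms."""
    diffs = []
    for _ in range(sims):
        ages = [random.gauss(40, 10) for _ in range(n)]
        random.shuffle(ages)                     # the randomisation step
        arm_a, arm_b = ages[: n // 2], ages[n // 2 :]
        diffs.append(abs(statistics.mean(arm_a) - statistics.mean(arm_b)))
    return statistics.mean(diffs)

small = mean_imbalance(20)    # 10 participants per arm
large = mean_imbalance(2000)  # 1000 participants per arm

print(f"typical chance imbalance, n=20:   {small:.2f} years")
print(f"typical chance imbalance, n=2000: {large:.2f} years")
# The small trial shows much larger chance imbalances -- which is exactly
# what its wider standard error (and confidence interval) already reflects.
```

The small trial routinely produces imbalances an order of magnitude larger than the big one, and the standard error scales accordingly: the statistics already price in the chance of imbalance.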

This leads to the second myth, which is that we should test for baseline differences between randomised groups. We see this all the time – usually Table 1 in an experiment – a range of demographic variables (the covariates we’ve measured – the known knowns) for each of the two groups, and then a column of p-values. Now, this is a valid approach in an observational study, where we might want to test whether something is in fact a confounder by testing whether it is associated with the level of the exposure (e.g. whether or not someone drinks alcohol). But is it valid in an experimental study (e.g. if we’re randomising people to consume a dose of alcohol or not)?

    Once we start to think about what those p-values in Table 1 might be telling us, the conceptual confusion becomes clear. A randomisation procedure should be robust (i.e., immune to outside influence), and the methods section should give us the information to evaluate this. What would a statistical test add to this? As Doug Altman said in 1985: ‘performing a significance test to compare baseline variables is to assess the probability of something having occurred by chance when we know that it did occur by chance’. If our randomisation procedure is robust, by definition any difference between the groups must be due to chance. It’s not a null hypothesis we’re testing, it’s a non-hypothesis.

    Aha! But what if our randomisation process is not robust for reasons we’re not aware of? Surely we can test for that this way? But how should we do that? In particular, what alpha level should we set for declaring statistical significance? The usual 5%? If we did that, we would find baseline differences in 1 in 20 studies (more, probably, since multiple baseline variables are usually included in Table 1) even if all of them had perfectly robust randomisation. Better to invest our energies in ensuring that our randomisation mechanism is indeed robust by design (e.g., computer-generated random numbers that are generated by someone not involved in data collection).
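The “1 in 20” point can be checked directly. This sketch is my own (a permutation test on an invented baseline covariate, not anything from the article): simulate many trials whose randomisation is perfectly robust by construction, and count how often the Table 1 test cries foul anyway.

```python
import random

# If randomisation is robust, any baseline difference is chance by
# definition, so a baseline "significance test" rejects at its alpha level
# no matter what. Here: a simple permutation test on one baseline covariate.
random.seed(1)

def baseline_p_value(n=40, perms=200):
    """Permutation p-value for a baseline difference between two random arms."""
    covariate = [random.gauss(0, 1) for _ in range(n)]
    random.shuffle(covariate)  # robust randomisation into two equal arms
    observed = abs(sum(covariate[: n // 2]) - sum(covariate[n // 2 :]))
    hits = 1  # count the observed split itself (standard +1 correction)
    for _ in range(perms):
        random.shuffle(covariate)
        sim = abs(sum(covariate[: n // 2]) - sum(covariate[n // 2 :]))
        if sim >= observed:
            hits += 1
    return hits / (perms + 1)

# Run many "trials", every one randomised perfectly, and count how often the
# baseline test comes out "significant" at the usual 5% level.
p_values = [baseline_p_value() for _ in range(400)]
false_alarms = sum(p < 0.05 for p in p_values) / len(p_values)
print(f"'significant' baseline differences: {false_alarms:.1%} of trials")
# Roughly 1 in 20 -- despite every single trial having flawless randomisation.
```

Roughly five per cent of perfectly randomised trials flag a “significant” baseline difference, which is exactly what the test is built to do and tells us nothing about the randomisation mechanism.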

    OK, OK – but what about deciding which of our baseline characteristics to adjust for in our analysis? It’s true that adjusting for baseline covariates that are known to influence the outcome can increase the precision of our estimates (and shrink our p-values – hurrah!). But testing for baseline differences to decide what to adjust for is again conceptually flawed. A statistically significant difference is not necessarily a meaningful difference in terms of the impact on our outcome. It depends in large part on whether the covariate does in fact strongly influence the outcome, and we aren’t testing that! Much better to select covariates based on theory or prior evidence – identify the variables we think a priori are likely to be relevant and adjust for these.
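The precision gain from adjusting for a prognostic covariate can also be simulated. Again this is my own hypothetical sketch (invented effect sizes; to stay stdlib-only I subtract the covariate’s known contribution rather than fitting a regression, which is what one would do in practice):

```python
import random
import statistics

# When a baseline covariate strongly predicts the outcome, adjusting for it
# removes that noise from the comparison, so the treatment-effect estimate
# varies less from trial to trial (i.e. is more precise).
random.seed(7)

def simulate_trial(n=100, effect=1.0):
    """One randomised trial: outcome = effect*arm + 2*covariate + noise."""
    arm = [i % 2 for i in range(n)]
    random.shuffle(arm)
    cov = [random.gauss(0, 1) for _ in range(n)]
    out = [effect * a + 2.0 * c + random.gauss(0, 1) for a, c in zip(arm, cov)]
    # Unadjusted estimate: simple difference in means between arms.
    mean1 = statistics.mean(o for o, a in zip(out, arm) if a == 1)
    mean0 = statistics.mean(o for o, a in zip(out, arm) if a == 0)
    unadjusted = mean1 - mean0
    # Adjusted estimate: strip out the covariate's contribution first
    # (using the known coefficient; a real analysis would estimate it).
    resid = [o - 2.0 * c for o, c in zip(out, cov)]
    r1 = statistics.mean(r for r, a in zip(resid, arm) if a == 1)
    r0 = statistics.mean(r for r, a in zip(resid, arm) if a == 0)
    adjusted = r1 - r0
    return unadjusted, adjusted

results = [simulate_trial() for _ in range(1000)]
sd_unadj = statistics.stdev(u for u, _ in results)
sd_adj = statistics.stdev(a for _, a in results)
print(f"spread of unadjusted estimates: {sd_unadj:.2f}")
print(f"spread of adjusted estimates:   {sd_adj:.2f}")
# Both estimators are unbiased for the true effect, but the adjusted one
# varies far less across trials -- narrower confidence intervals.
```

Both estimators centre on the true effect, but the adjusted one has markedly smaller spread — which is precisely why the covariate is worth adjusting for on a priori grounds, not because a baseline test happened to be significant.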

    Randomisation is extremely powerful but also surprisingly simple. Its power comes from the ability it gives us to control some of the key causal pathways operating, and to convert complexity into measurable, predictable error. So we can relax! We don’t need to worry about ‘balance’ – our sample size and the standard error will take care of that (which is why we need to power our studies properly!) – and we don’t need to have that column of p-values in Table 1 – they don’t tell us anything useful or give us any information we can usefully act on. We should all – including the editors and reviewers who ask for these things – take note!

    How should we report randomisation?

    If we accept that the key to successful randomisation is getting the process right (rather than testing whether or not it works post hoc, which is fraught with conceptual and practical issues), how do we report randomisation in a way that allows readers to evaluate its robustness?

    In medical studies – particularly clinical trials – journals expect authors to follow reporting guidelines (these exist for a vast range of study designs, many of which are relevant to psychology). A full description might look something like this:

Randomisation was generated by an online automated algorithm (at a ratio of 1:1), which tracked counts to ensure each intervention was displayed equally. Allocation was online and participants and researchers were masked to study arm. If participants raised technical queries the researcher would be unblinded; however, participants seeking technical assistance received no information on the intervention in the other condition and so were not unblinded. The trial statistician had no contact with participants throughout the trial and remained blinded for the analysis. At the end of the baseline survey, participants were randomised to view one of two pages with the recommendation to either download Drink Less (intervention) or the recommendation to view the NHS alcohol advice webpage (comparator).

    This example was taken from a recent article published by Claire Garnett and colleagues (disclosure: I’m a co-author!), which tested the efficacy of an app to reduce alcohol consumption. As it was a clinical trial and published in a medical journal it had to follow the relevant reporting guidelines and describe the randomisation process fully.

Of course, sometimes the randomisation process is robust and can be described very briefly – a computer task may have randomisation built in, so the experimenter doesn’t need to be involved at all. But that should still be described clearly. And sometimes the randomisation process does involve humans (and therefore may be potentially biased!).

Something I’ve learned throughout my career is that we can learn a lot from how things are done in other disciplines (and also showcase what we do well in psychology). This is perhaps one example of that – there’s lots of good practice in psychology when it comes to reporting randomised studies, but we can still look to learn and improve.

    Marcus Munafò is a Professor of Biological Psychology and MRC Investigator, and Associate Pro Vice-Chancellor – Research Culture, at the University of Bristol. marcus.munafo@bristol.ac.uk


  • Did technology and change all get too fast for people and behaviour – and exams?

    As technology advances, schools and universities are increasingly challenged to keep pace, but they are struggling. Technology affects learning, teaching, assessment, and… academic honesty. While digital tools, artificial intelligence, and online platforms may offer some benefits, they also raise questions about how schools (and colleges/universities) manage human behavior—especially when it comes to dishonesty.

ChatGPT can help students with research, essay writing, and solving maths problems. While this can enhance learning, it also presents opportunities for academic dishonesty. Many institutions still lack policies that specifically address AI-generated work, leaving them to play catch-up as students find creative ways to bypass traditional assessment methods. The rapid adoption of online assessment by schools and colleges has also opened the door to new forms of cheating: from sharing answers via social media, as was the case with the International Baccalaureate in the May 2024 session, to using unauthorized tech during exams. Students now have easy access to tools that can undermine the integrity of assessments. School administrators are often left scrambling for solutions like lockdowns and AI detection software, but these tools are not foolproof and can lead to intrusive surveillance and unnecessary tension.

Institutions that jumped onto the high-tech (and expensive) bandwagon are now facing cheating problems for which they had not prepared, and some seem to be rethinking their decision to go down the tech route.

    Read this article on the Radio New Zealand website about universities and their online assessment problems and solutions.


  • Theory of knowledge Thursday

    Thinking Thursday is our weekly slot for Theory of knowledge thinking.


    UN-truth? Was Orwell some kind of fortune teller? 

    UN chief António Guterres (secretary-general of the United Nations) has said this week: “Digital platforms are being misused to subvert science and spread disinformation and hate to billions of people. This clear and present global threat demands clear and co-ordinated global action.” 

    Who decides?

    Clearly he hasn’t read the TOK Guide or met an IB student who would immediately ask: Who and on what basis decides what is true and not true? And how can he (and presumably the people paying him) be certain that he is not wrong? We all saw what happened during Covid! TOK students will remember New Zealand Prime Minister Jacinda Ardern saying: “We will continue to be your single source of truth… Unless you hear it from us it is not the truth.” 

    TOK: Who decides what truth is?


    N.H.S. No Health Service

    Nurses can refuse to treat racist patients, says new UK government Health minister Wes Streeting. Nursing guidelines have been specifically updated to include ‘racism’. However, Elon Musk (him again) has been re-posting alleged tweets from Mr Streeting which appear to show him inciting violence by fantasising about punching people and throwing his political opponents under trains…. 

    TOK: On what basis can someone (such as a politician or nurse) impose their view on others (such as a patient they claim is being racist)? 


    Stoning

    UK Birmingham based Imam Sheikh Zakaullah Saleem has released a video where he rather too calmly and patiently explains the correct procedure for stoning a woman to death. However, this is only if she has cheated on her husband. So, it’s not like he’s a genuine psychopath or anything! 

The video originates from the Green Lane Mosque in Birmingham UK which recently obtained £2.2 million in funding from the British government in the name of ‘aiding the youth within the Birmingham community’. He comes across as though he is calmly describing his favourite bread recipe. Firstly, she must be buried up to her waist. This, he finger-waggingly admonishes, is to safeguard her modesty. Then, only after her dignity has been protected can the throwing of the holy stones begin. This sacred ritual ends when the “convict” dies of her injuries (presumably with her dignity still intact!). Thankfully, this punishment only applies to married women: “If they are unmarried, they will be beaten with 100 lashes in front of a big gathering.” (Direct quote.)

The video has now been removed from YouTube.

    It should be noted that: Malala Yousafzai, a Pakistani activist for female education and the youngest Nobel Prize laureate, has spoken out against harsh punishments such as stoning: “We must not forget that the highest form of justice is mercy. Punishments like stoning to death are a violation of human rights and go against the very essence of Islam, which teaches compassion, mercy, and forgiveness.”

    TOK: On what basis can moral conflicts be resolved? Can the practices of one individual or culture be judged with any validity by applying the moral values of another generation or another culture?


Multilingualism is tough on the brain

    Research into how multilingual people manage multiple languages in their minds is fascinating and sometimes surprising.

    When a multilingual person speaks, all the languages they know can actually be active at once, even if only one language is being used. These languages can sometimes “intrude” unexpectedly, showing up not just in occasional vocabulary mix-ups, but even in subtle ways like grammar or accent shifts.

    Read this interesting BBC article for more on the topic.


  • Fixing New Zealand’s School Attendance Crisis

New Zealand has a school attendance problem. No, it’s not a problem, it’s a crisis.

    Some have muttered ‘Covid-19 lockdowns’, but no, the crisis was apparent well before the grubby little virus appeared and way ahead of the lockdowns. Some think it has a lot more to do with New Zealand’s dumbed down curriculum, its decades-old everyone-gets-a-certificate assessment system, rampant wokeism in the school system, and the way more attractive options of couch-based X-Box-esque pursuits, shopping malls or just hanging out with the bros. The reasons are less important than the consequences. A poor education means a poor life and it really is that simple.

    Elsewhere in New Zealand’s government / society, some are bleating and wringing their hands and clutching their pounamu necklaces while muttering about prison demographics, wrongly identifying ethnicity as a major causal factor. The strongest correlation is not race-prison, it’s education-prison. Among New Zealand’s prison population there is a near-100% illiteracy rate.  

    One more time, a poor education means a poor life.

    Poor education leads to low-income jobs, poor housing, poor diet… a poor life is a downward spiral and escape from it is nearly impossible.

    A good education leads to a good life. Get New Zealand’s education system right and good things will come.

    And New Zealand’s new government is taking action, talking about school attendance and truancy/absenteeism. Associate Education Minister David Seymour will tour New Zealand to talk with school communities about the new STAR (Stepped Attendance Response) system to track attendance and tackle absenteeism. Schools are being required to take charge of truancy by engaging with parents. Other Government agencies are also on notice and will be expected to engage too.

    “The basic premise of the STAR is that no child is left behind. Every student, parent, teacher and school has a role to play. Each school will develop their own STAR system to suit their community and school,” he said.

    “Almost every aspect of someone’s adult life will be defined by the education they receive as a child. If we want better social outcomes, we can’t keep ignoring the truancy crisis.”


  • Italy – poor behaviour at school now means you fail

    Italy has reinstated a “grades for conduct” policy, allowing schools to fail students based solely on their behavior. Middle and high school students who score five or less out of 10 on conduct will fail the year, regardless of their academic performance, while those scoring six will need to take a civic education test. This measure, part of an education bill approved by parliament, aims to address rising aggression toward teachers. The education minister, Giuseppe Valditara, and Prime Minister Giorgia Meloni believe the policy will restore respect and responsibility in schools.

A civic education test – what an excellent idea. Study the book on how to behave like a good citizen. Students, of course, shouldn’t need a book to learn this from; after all, they have parents to teach them how to behave civilly. But some parents don’t teach their children this, so, there’s a book and then there’s a test.

    Read the full story in The Guardian here.


  • “School must go back to the basics.”

    From 2028, children in Sweden will begin school at age six, a year earlier than the current system, as part of a significant education reform. The Swedish government plans to replace the compulsory preschool year, known as förskoleklass, with an additional year in grundskola (primary school). This shift, initiated by the center-right government and supported by the far-right Sweden Democrats, emphasizes early education in reading, writing, and mathematics.


    Education Minister Johan Pehrson believes the changes will strengthen children’s foundational skills. However, critics argue that this move undermines the benefits of play-based learning, which research suggests fosters creativity, problem-solving, and language development in young children. Concerns have been raised that the reform could jeopardize preschool teachers’ jobs, as their specialized methods might be neglected.

    Experts like Christian Eidevald and Ingrid Pramling Samuelsson criticize the reform for disregarding six-year-olds’ developmental needs and urge investment in improving education quality rather than structural changes. Conversely, some, like Johannes Westberg, support the plan, noting that it aligns Sweden’s education system more closely with the rest of Europe.

    The education minister, Johan Pehrson, said “school must go back to the basics” and added that there would be a stronger focus on early learning to read and write, as well as mathematics.

    Other proposed educational reforms include investing in emergency schools, increasing textbook usage to reduce screen time, and providing more training for teachers and preschool teachers.