High doses of ethanol induce neurotoxicity in limbic structures such as the amygdala, causing behavioural disorders and structural disorganisation of neurons. This study examined the neuroprotective mechanism of myricetin (a natural flavonoid) against ethanol-induced anxiety-like and depression-like behaviours in adult male Wistar rats, with particular reference to the preservation of parvalbumin-positive (PV+) GABAergic neurons and Nissl substance in the amygdala. Sixty rats were randomly assigned to six groups (n=10): control, ethanol (5 g/kg), myricetin (150 mg/kg), myricetin (300 mg/kg), ethanol + myricetin (150 mg/kg) and ethanol + myricetin (300 mg/kg). Treatments were administered orally for 21 days. Behaviour was assessed using the Elevated Plus Maze (EPM) and the Tail Suspension Test (TST), and Nissl substance and PV+ neurons in the amygdala were evaluated by histological and immunohistochemical methods. Ethanol significantly increased anxiety-like and depression-like behaviour and decreased Nissl substance and PV+ expression. Myricetin, particularly at 300 mg/kg, significantly reversed these changes, improving behavioural indices and preserving neurons. These findings indicate that myricetin can alleviate ethanol-induced emotional dysfunction and loss of neuronal integrity in the amygdala by maintaining inhibitory neuronal networks, supporting its therapeutic potential in ethanol-associated emotional disorders.
Artificial Intelligence (AI) systems are increasingly integrated into medical diagnostics, promising enhanced efficiency, accuracy and predictive capability. However, the rapid deployment of these tools raises complex legal and ethical questions about liability when diagnostic errors occur. This paper critically examines the current regulatory landscape for medical AI, identifies the principal liability theories applicable to diagnostic failures, and analyses how responsibility could be apportioned among developers, healthcare providers and institutions. We argue that traditional frameworks (product liability, medical malpractice and regulatory compliance) are insufficient on their own to address AI's unique challenges, including opaque decision processes and continuous learning. We propose a hybrid liability model incorporating strict liability for developers, shared responsibility for clinicians, and mandatory transparency standards. Implications for policy, clinical practice and future research are discussed.