
Automated English test for visa applicants fails native speakers

Louise Kennedy is an equine vet and has two degrees. Photo: AAP

An Irish equine vet with two degrees has been unable to convince a machine she can speak English well enough to stay in Australia.

And while the company that administers the test for the Immigration Department has stood by its processes, it has since offered Louise Kennedy the chance to resit the test.

Dr Kennedy has been working in Australia on a skilled worker visa for the past two years and hoped to gain permanent residency. But the English proficiency test, which uses voice-recognition technology to assess speaking ability, found she was not competent enough in her own language to satisfy immigration regulations.

She needed a score of 79 out of 90 in the speaking section of the test, but fell short with 74 points.

Dr Kennedy has blamed the automated marking system by Pearson Test of English (PTE) Academic, one of five test providers the Immigration Department uses to assess English competency for visa purposes.

“There’s obviously a flaw in their computer software, when a person with perfect oral fluency cannot get enough points,” she told AAP.

An Australian woman, who needed to sit the test for nursing, has also come forward to say she failed.

Sasha Hampson, head of English for Pearson Asia Pacific, told The New Daily the government only required 65 – not 79 – points, and that Dr Kennedy must have required additional points for unrelated reasons.

“It’s a points-based system, dependent on their age, their language proficiency, their work experience, for example.”
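The points-based system Ms Hampson describes can be sketched roughly as follows. The criteria come from her quote (age, language proficiency, work experience), but the point values and threshold below are entirely hypothetical, not the Immigration Department's actual schedule:

```python
# Illustrative sketch only: all point values and the threshold are
# hypothetical, not the real Immigration Department schedule.
def visa_points(age_points, english_points, experience_points):
    """Sum points earned independently across criteria."""
    return age_points + english_points + experience_points

THRESHOLD = 65  # hypothetical pass mark for illustration

# A candidate strong on English can carry weaker scores elsewhere,
# while one who scores lower on English needs more points from
# age or work experience to reach the same total.
total = visa_points(age_points=30, english_points=20, experience_points=10)
print(total, total >= THRESHOLD)  # 60 False
```

This illustrates why, as Ms Hampson suggests, two applicants can face different effective English requirements: the shortfall in one criterion must be made up in another.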

Ms Hampson said that 74 was well within the range for a native English speaker.

How does a computer mark speech?

Pearson’s automated scoring system was built on algorithms trained with data from 10,000 candidates speaking more than 120 different languages.

“For the speaking component, nearly 400,000 responses were collected and marked by human raters.”

Ms Hampson said the automated system was more accurate because it could not be influenced by human bias.

“Automated scoring has the benefit of removing this effect as it is indifferent to a test taker’s appearance and personality, and is not affected by human errors due to examiner tiredness, mood etc.

“Such impartiality means that test takers can be confident that they are being judged solely on their language performance and that they would have earned the same score if the test had been administered in Beijing, Singapore or Delhi.”

In the test, applicants are asked to read text on a screen, describe an image they are prompted with, and listen to a lecture before summarising its content. Pearson doesn’t offer a pass or a fail, simply a score.
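Training an automated marker on human-rated responses, as Pearson describes, typically means fitting a model that maps measurable speech features to the scores human raters assigned. Pearson's actual system is proprietary; the sketch below is a generic, minimal illustration of the idea, using a single made-up feature (speaking rate) and a simple least-squares fit:

```python
# Generic sketch of score prediction trained on human ratings.
# The feature (words per minute) and the training data are invented
# for illustration; Pearson's real features and model are not public.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical training set: speaking rate vs. human rater score (0-90 scale).
wpm =     [80, 100, 120, 140, 160]
ratings = [50,  60,  70,  80,  90]

slope, intercept = fit_line(wpm, ratings)

def predict_score(rate):
    """Predict a speaking score for an unseen candidate."""
    return slope * rate + intercept

print(round(predict_score(130)))  # 75
```

The limitation the test-takers in this story point to is visible even in this toy version: the model can only reward whatever its features capture, so fluent speech that deviates from the patterns in the training data (an unusual accent, for instance) may be scored lower than a human rater would score it.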

A spokesperson for the Immigration Department said it was not involved in the administration or operation of the tests.

The department has not been made aware of other complaints regarding voice recognition technology, the spokesperson said.

Former newsreader also fails test

An Australian citizen with an English degree who once worked as a newsreader has also revealed she failed the test.

The Melbourne woman, who was born in Singapore and speaks three languages, is now a mature-age student completing a masters degree in nursing and has lived in Australia for 20 years.

She took the Pearson PTE last month so she could satisfy registration requirements to work as a nurse when her final year of study is complete.

While she got the highest possible score of 90 for listening and writing, she managed only 62 for speaking – eight points below what the Australian Health Practitioner Regulation Agency will accept.

The nursing student doesn’t want to be named, fearing it could damage her employment prospects, but she blames the artificial intelligence system used to assess candidates.

“I have a degree in English from the National University of Singapore. I’ve studied phonetics and semantics and English literature and the history of English. And after that I was a radio announcer. I actually read the news on radio in Singapore – in English,” she told AAP.

“I’ve been speaking English since I was a toddler.

“I’m just not sure that voice recognition software is as sophisticated as it ought to be to detect nuances in speech, accent, and assess appropriately – as a human being would.”

She has asked for a review.

A snippet from a 2011 Pearson PTE tips guide provides an example of what could be asked in the speaking component of the test. Photo: Pearson PTE

-with AAP
