
Asset managers on alert after ‘WhatsApp’ crackdown on banks

  • Demand for software to record, archive messaging on the rise.
  • Banks pay hundreds of millions of dollars in regulatory fines.
  • Continued remote working underscores risk of compliance missteps.

LONDON: Asset managers are tightening controls on personal communication tools such as WhatsApp as they join banks in trying to ensure employees play by the rules when they do business with clients remotely.

Regulators had already begun to clamp down on the use of unauthorised messaging tools to discuss potentially market-moving matters, but the issue gathered urgency when the pandemic forced more finance staff to work from home in 2020.

Most of the companies caught in communications and record-keeping probes by the U.S. Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) have been banks – which have collectively been fined or have set aside more than $1 billion to cover regulatory penalties.

But fund firms with billions of dollars in assets are also increasing their scrutiny of how staff and clients interact.

“It is the hottest topic in the industry right now,” said one deals banker, who declined to be named in keeping with his employer’s rules on speaking to the media.

Reuters reported last year the SEC was looking into whether Wall Street banks had adequately documented employees’ work-related communications, and JPMorgan was fined $200 million in December for “widespread” failures.

German asset manager DWS said last month it had set aside 12 million euros ($12 million) to cover potential U.S. fines linked to investigations into its employees’ use of unapproved devices and record-keeping requirements, joining a host of banks making similar provisions, including Bank of America, Morgan Stanley and Credit Suisse.

Sources at several other investment firms – described in the financial community as the ‘buy-side’ – including Amundi, AXA Investment Managers, BNP Paribas Asset Management and JPMorgan Asset Management, told Reuters they have deployed tools to keep all communications between staff and clients compliant.

Spokespeople for the SEC and CFTC declined to comment on whether their investigations could extend beyond the banks, but industry sources expect authorities to cast their nets wider across the finance industry and even into government.

Last month Britain’s Information Commissioner’s Office (ICO), the country’s top data protection watchdog, called for a review of the use of WhatsApp, private emails and other messaging apps by government officials after an investigation found “inadequate data security” during the pandemic.

Good business for some

Regulations governing financial institutions have progressively been tightened since the global financial crisis of 2007-9 and companies have long recorded staff communications to and from office phones.

This practice is designed to deter and uncover infringements such as insider trading and “front-running” – trading ahead of client orders – as well as to ensure best practice in the treatment of customers.

But with thousands of finance workers and their clientele still working remotely after decamping from company offices at the start of the pandemic, some sensitive conversations that should be recorded remain at risk of being inadvertently held over informal or unauthorised channels.

Brad Levy, CEO of business messaging software firm Symphony, said concerns about managing that risk had driven a surge in interest for software upgrades that make conversations on popular messaging tools including Meta Platforms’ WhatsApp recordable.

“Most believe the breadth of these investigations will go wider as they go deeper,” Levy said.

“Many market participants have retention and surveillance requirements so are likely to take a view, including being more proactive without being a direct target.”

He said Symphony’s user base has more than doubled since the pandemic to 600,000, spanning 1,000 financial institutions including JPMorgan and Goldman Sachs.

Symphony peer Movius also said its business lines specialising in making WhatsApp and other tools recordable have more than doubled in size in the space of a year, with sales to asset managers a growing component.

“Many on the buy-side have recognised that you can’t just rely on SMS and voice calls,” said Movius Chief Executive Ananth Siva, adding that the company was also seeking to work with other highly regulated industries, including healthcare.

Movius software integrates third-party communications tools such as email, Zoom, Microsoft Teams and WhatsApp into one system that can be recorded and archived as required, he said.
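Movius does not publish its internals, but the pattern Siva describes – many channels feeding one recordable, archivable store – can be sketched. The Python example below is a minimal illustration of that idea, not Movius’s actual software; every class, field and file name in it is hypothetical.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class Message:
    channel: str    # e.g. "whatsapp", "email", "teams", "zoom"
    sender: str
    recipient: str
    body: str
    timestamp: str

class ComplianceArchive:
    """Normalises traffic from many channels into one append-only log."""

    def __init__(self, path: str):
        self._path = path

    def record(self, msg: Message) -> None:
        # Append-only, timestamped storage is the core regulatory need:
        # every business communication must be retained and searchable.
        with open(self._path, "a", encoding="utf-8") as log:
            log.write(json.dumps(asdict(msg)) + "\n")

def capture(archive: ComplianceArchive, channel: str, sender: str,
            recipient: str, body: str) -> None:
    """Called by a per-channel connector whenever a message is seen."""
    archive.record(Message(channel, sender, recipient, body,
                           datetime.now(timezone.utc).isoformat()))

# The same archive receives traffic from every integrated channel.
archive = ComplianceArchive("comms_archive.jsonl")
capture(archive, "whatsapp", "trader@fund.example", "client@corp.example",
        "Confirming the allocation we discussed.")
capture(archive, "email", "client@corp.example", "trader@fund.example",
        "Received, thanks.")

In a production system the log would feed retention and surveillance tooling; the JSON-lines file here simply stands in for that storage layer.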

Amundi, AXA IM, BNPP AM and JPMorgan Asset Management all confirmed they had adopted Symphony software but declined to comment on the full breadth of services they used or when these had been rolled out.

Amundi and AXA IM both confirmed they used Symphony services for team communications, while AXA IM said it also used the platform for market information.

Amundi, BNPP AM and JPMorgan AM declined to comment on whether they thought regulators would seek to investigate record-keeping at asset managers after enforcement actions against the banks were completed.

A spokesperson for BNPP AM said it had banned the use of WhatsApp for client communications due to compliance, legal and risk considerations, including the General Data Protection Regulation (GDPR).

Humanity at risk: Musk, others ring alarm bells over hasty ‘giant AI experiments’

Hundreds of prominent artificial intelligence (AI) researchers and technologists, including Tesla chief Elon Musk, have signed an open letter urging AI labs to pause work on giant AI systems, ringing alarm bells over the “profound risks” these systems pose to society and humanity.

According to the letter, published by the nonprofit Future of Life Institute, AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.”

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” said the open letter.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” it added.

The signatories argue that development of such powerful systems should be given enough time for researchers to establish that they are safe.

Among the signatories of the letter are author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and several well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque. 

The letter was mainly prompted by the release of GPT-4 from the San Francisco firm OpenAI.

The company says its latest model is much more powerful than the previous version, which was used to power ChatGPT, a bot capable of generating tracts of text from the briefest of prompts.

“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” says the letter. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Musk was an early investor in OpenAI and spent years on its board; his car firm Tesla develops AI systems to help power its self-driving technology, among other applications.

The letter, hosted by the Musk-funded Future of Life Institute, was signed by prominent critics as well as competitors of OpenAI like Stability AI chief Emad Mostaque.

The letter quoted from a blog post by OpenAI co-founder Sam Altman, who suggested that “at some point, it may be important to get independent review before starting to train future systems”.

“We agree. That point is now,” the authors of the open letter wrote.

They called for governments to step in and impose a moratorium if companies failed to agree.

The letter says the six-month pause should be used to develop safety protocols and AI governance systems, and to refocus research on making AI systems more accurate, safe, “trustworthy and loyal”.

The letter did not detail the dangers revealed by GPT-4.

AI can potentially diagnose and treat cancers: study

Artificial intelligence (AI) can do more than write essays and pass bar exams: it is also capable of helping detect and treat different types of cancer, according to a report from Scripps News.

Researchers from the University of Toronto and Insilico Medicine used AlphaFold, an AI-powered protein structure database, to develop in just 30 days a candidate drug to treat hepatocellular carcinoma (HCC), a form of liver cancer.

The study was published in the journal Chemical Science. In it, the AI system identified a previously unexplored pathway for treating HCC and designed a novel “hit” molecule capable of binding to the target.

In a second round, the researchers discovered an even more potent hit molecule, though they cautioned that the drug must still undergo clinical trials before it can be used to treat cancer in hospitals.

Feng Ren, the study’s co-author, said in a statement that the AI-powered AlphaFold broke new scientific ground in predicting the structure of human proteins.

“At Insilico Medicine, we saw that as an incredible opportunity to take these structures and apply them to our end-to-end AI platform in order to generate novel therapeutics to tackle diseases with high unmet needs. This paper is an important first step in that direction,” he added.

In another study, published in the journal JAMA Network Open, experts from BC Cancer and the University of British Columbia showed how accurately AI could predict the survival rates of cancer patients.

The model analysed records from more than 47,000 patients, predicting survival rates at six months and at three and five years.

Dr John-Jose Nunez, the study’s lead author, said in a statement: “The AI essentially reads the consultation document similar to how a human would read it.”

“These documents have many details like the age of the patient, the type of cancer, underlying health conditions, past substance use, and family histories. The AI brings all of this together to paint a more complete picture of patient outcomes,” he said.
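The report does not describe the model’s architecture, but the approach Nunez outlines – a system that reads free-text consultation notes and predicts an outcome – can be illustrated. The Python sketch below uses a simple TF-IDF text classifier from scikit-learn as a stand-in for the study’s actual model; the notes and labels are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: consultation notes paired with a label saying
# whether the patient survived past a chosen horizon (e.g. five years).
notes = [
    "72-year-old patient, stage IV disease, significant comorbidities.",
    "45-year-old patient, stage I tumour fully resected, no prior history.",
    "60-year-old patient, stage III disease, partial response to therapy.",
    "38-year-old patient, stage I disease, no underlying conditions.",
]
survived = [0, 1, 0, 1]

# TF-IDF turns each document into word/phrase frequencies; the classifier
# then learns which phrases are associated with each outcome.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, survived)

# Estimate a survival probability for a new consultation document.
new_note = "58-year-old patient, stage II disease, good performance status."
print(model.predict_proba([new_note])[0][1])

A real system would be trained on tens of thousands of documents and validated on held-out patients; the tiny dataset here only demonstrates the shape of the pipeline.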

Researchers are still at an early stage in exploring the full potential of AI in healthcare and in the diagnosis and treatment of disease, but these studies suggest the technology holds real promise.

“Our hope is that a tool like this could be used to personalise and optimise the care a patient receives right away, giving them the best outcome possible,” Nunez said. 

Numerous mobile applications at risk of cyberattacks: study

As digitisation accelerates, so does the risk of cyberattack. A study by security firm Promon found that around 81% of mobile apps can be targeted by cyberattacks, Digital Information World reported.

The findings underline how exposed users’ personal data is to attackers.

The study analysed more than 350 mobile applications and found that most were vulnerable to at least one form of attack.

The vulnerable apps typically suffered from insecure data storage and inadequate encryption, weaknesses attackers can exploit to gain unauthorised access to users’ personal data.

Of the apps analysed, 84% were unable to detect a malicious attempt or a potential data or security breach.

The study also highlighted “repackaging”, a major attack technique in which an attacker modifies a legitimate app and redistributes it with extra code it was never designed to run, giving the attacker extensive control over the target’s data. Only 15% of the apps analysed had protection against this kind of tampering.
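Promon’s report does not detail the defences it tested, but one common protection against repackaging is a runtime integrity check: the app verifies that its own package still matches a digest recorded when the genuine version was built. The Python sketch below illustrates the idea; the file name and expected digest are placeholders.

import hashlib
import sys

# Digest of the genuine package, recorded at build time (placeholder value).
EXPECTED_SHA256 = "0" * 64

def package_digest(path: str) -> str:
    """Compute the SHA-256 digest of the installed app package."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()

def verify(path: str) -> None:
    # A mismatch means the package was modified after it was built:
    # the signature of a repackaged app carrying injected code.
    if package_digest(path) != EXPECTED_SHA256:
        sys.exit("integrity check failed: package has been tampered with")

verify("app_package.bin")

On mobile platforms the check is usually made against the app’s signing certificate rather than a raw file hash, but the principle is the same: code added by a repackager changes the package and fails verification.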

Given these alarming findings, developers are urged to carry out security checks and identify vulnerabilities. Users, meanwhile, are advised to avoid downloading apps or files from unknown sources, especially those that request sensitive access to personal information.

The report also follows a recent cyberattack on one of the world’s best-known carmakers.

The report comes at a time when mobile use is at a record high, with millions of people depending on their phones for information, banking, healthcare, transport and more. That dependence has made personal data an increasingly attractive target for criminals.

It is strongly recommended that mobile application developers carry out routine security checks and be proactive in identifying any gaps in their defences.

If this is not done in a timely manner, the cost to individuals and to businesses that depend heavily on digital applications could be severe: not only lost information and personal data, but hefty financial losses as well.
