Daily Tech Digest - February 03, 2022

DeepMind says its new AI coding engine is as good as an average human programmer

A lot of progress has been made developing AI coding systems in recent years, but these systems are far from ready to simply take over the work of human programmers. The code they produce is often buggy, and because the systems are usually trained on libraries of public code, they sometimes reproduce copyrighted material. In one study of Copilot, an AI programming tool developed by code-hosting service GitHub, researchers found that around 40 percent of its output contained security vulnerabilities. Security analysts have even suggested that bad actors could intentionally write and share code with hidden backdoors online, which might then be used to train AI programs that would insert these errors into future programs. Challenges like these mean that AI coding systems will likely be integrated slowly into the work of programmers — starting as assistants whose suggestions are treated with suspicion before they are trusted to carry out work on their own. In other words: they have an apprenticeship to carry out. But so far, these programs are learning fast.

The evolution of a Mac trojan: UpdateAgent’s progression

UpdateAgent is uniquely characterized by its gradual upgrading of persistence techniques, a key feature that indicates this trojan will likely continue to use more sophisticated techniques in future campaigns. Like many information-stealers found on other platforms, the malware attempts to infiltrate macOS machines to steal data, and it is associated with other types of malicious payloads, increasing the chances of multiple infections on a device. The trojan is likely distributed via drive-by downloads or advertisement pop-ups, which impersonate legitimate software such as video applications and support agents. By impersonating or bundling itself with legitimate software, the trojan increases the likelihood that users are tricked into installing it. Once installed, UpdateAgent collects system information that is then sent to its command-and-control (C2) server. Notably, the malware's developer has periodically updated the trojan over the last year to improve its initial functions and add new capabilities to its toolbox.
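One of the macOS persistence mechanisms this class of trojan typically abuses is the LaunchAgents folder, where a property-list file can make a program run at every login. A minimal, hypothetical audit sketch (the directory and field names follow Apple's launchd plist format, but this is a first-pass check, not a detection tool):

```python
import plistlib
from pathlib import Path

def find_launch_agents(agents_dir):
    """List LaunchAgent plists and the program each one launches.

    Auditing ~/Library/LaunchAgents (or /Library/LaunchAgents) is a
    simple first step when hunting for login-time persistence.
    """
    findings = []
    for plist_path in sorted(Path(agents_dir).glob("*.plist")):
        with open(plist_path, "rb") as fh:
            data = plistlib.load(fh)
        # A launchd job names its program via "Program" or the first
        # element of "ProgramArguments".
        program = data.get("Program") or (data.get("ProgramArguments") or [None])[0]
        findings.append((plist_path.name, data.get("Label"), program))
    return findings
```

Each tuple pairs a plist file with the label and binary it runs; unfamiliar entries pointing at recently downloaded binaries would merit closer inspection.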

AI technology is redefining surveillance

With new AI-based surveillance tools like facial recognition, businesses can go beyond security to provide their employees with an enhanced workplace experience. With 98% of IT leaders concerned about security challenges related to a hybrid workforce, AI-based surveillance can help better manage staggered and irregular work schedules by seamlessly identifying which employees are supposed to be in specific areas at certain times. This tiered access control can be taken a step further, as an operator can link facial recognition security software to systems that control automated doors or pair it with baseline authentication solutions for an added layer of protection. By the same token, these AI surveillance tools can also be used to directly benefit the workers they observe. Now, a video management system (VMS) can be trained to identify VIPs, authorized personnel, and guests of the company, creating a frictionless entry experience for those whom security would wish to treat with the utmost care. Imagine if, every time the CEO walked into the company building, they could pass through security without having to sign in.
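The schedule-aware, tiered access control described above reduces to a lookup once facial recognition has produced an identity. A minimal sketch, assuming a hypothetical schedule table keyed by employee and area (all names and time windows are illustrative):

```python
from datetime import time

# Hypothetical schedule: employee ID -> {area: (start, end)} access windows.
SCHEDULE = {
    "alice": {"lab": (time(8, 0), time(18, 0))},
    "bob":   {"lobby": (time(0, 0), time(23, 59))},
}

def is_access_allowed(employee_id, area, now):
    """Return True if the recognized employee may enter `area` at `now`."""
    window = SCHEDULE.get(employee_id, {}).get(area)
    if window is None:
        return False  # no entry in the schedule: deny by default
    start, end = window
    return start <= now <= end
```

The deny-by-default branch is what makes this "tiered": an unrecognized face, or a known face outside its assigned area or hours, never opens the door.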

Explained: Prospective learning in AI

Prospective learning is important because many critical problems are novel experiences that come with little information, negligible probability, and high consequences. Unfortunately, such problems precipitate the downfall of AI systems, such as when medical diagnosis systems cannot detect diseases underrepresented in the samples used to train them. Therefore, the challenge with intelligent systems is to distinguish novel experiences, discern the potentially complex ways in which they connect to past experiences, and then act accordingly. ... Constraints, such as built-in priors and inductive biases, shrink the hypothesis space so that the intelligence needs less data and has fewer candidate solutions to evaluate for the current problem. These constraints are built into the AI system and traditionally come in the form of statistical constraints and computational constraints. The former restricts the space of hypotheses to improve statistical efficiency, thereby reducing the amount of data needed to reach a particular goal. The latter seeks to improve computational efficiency by limiting the amount of space and/or time that an intelligent system has to learn and make deductions.
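The statistical-constraint idea can be made concrete with a toy hypothesis-elimination example: the same handful of labelled examples narrows a small, prior-constrained hypothesis space much further than an unconstrained one. This is an illustrative sketch, not drawn from the paper the excerpt summarizes:

```python
def consistent_hypotheses(hypotheses, examples):
    """Keep only the hypotheses that agree with every labelled example."""
    return [h for h in hypotheses if all(h(x) == y for x, y in examples)]

# Unconstrained space: every threshold classifier on 0..99.
wide = [(lambda t: (lambda x: x >= t))(t) for t in range(100)]

# Constrained space: a built-in prior says the threshold is a multiple of 25.
narrow = [(lambda t: (lambda x: x >= t))(t) for t in range(0, 100, 25)]

# Two labelled examples: 10 is negative, 60 is positive.
examples = [(10, False), (60, True)]
```

After these two examples, the wide space still contains 50 consistent thresholds (11 through 60), while the constrained space is down to two (25 and 50): the prior has done much of the work the data would otherwise have to do.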

Silent Cyber Risks For Insurers: Can AI Applications Help?

Silent cyber is the term given to a situation in which cyber coverage is implied to be provided to an insured, without the knowledge of the insurer providing the coverage. In simpler words, a silent cyber situation strikes when a court's findings are in favour of a policy owner because the policy does not clearly exclude cyber coverage. Incidents of silent cyber surged during the Covid-19 period as ransomware incidents also proliferated. This uptrend in cyber-related insurance claims also increased insurers' exposure to cyber risk as they continued to pay out claims. It has also led many insurers to deny coverage, even in situations where a policy did not explicitly exclude cyber coverage, and has driven up premiums for cyber insurance. To balance out this issue, regulators have issued guidelines to help insurance firms manage such risks, but such initiatives have not been enough. Surveys have revealed an inherent need for advanced technologies that can bridge this glaring gap: the lack of explicit language referring to cyberattacks in insurance policies.
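One simple way technology can surface silent-cyber exposure is to screen policy wordings for the absence of any explicit cyber language. A minimal, hypothetical keyword screen (the term list and function name are illustrative; a production system would use far richer clause analysis):

```python
import re

# Terms whose absence suggests the policy is "silent" on cyber events.
CYBER_TERMS = re.compile(
    r"\b(cyber|ransomware|data breach|malware|hacking)\b",
    re.IGNORECASE,
)

def flags_silent_cyber(policy_text):
    """Return True if the policy wording contains no explicit cyber language,
    marking it as a candidate for silent-cyber ambiguity."""
    return CYBER_TERMS.search(policy_text) is None
```

Policies flagged this way would be routed to underwriters to add explicit affirmative cover or an explicit exclusion, removing the ambiguity courts have been resolving in policyholders' favour.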

The downside of machine learning in health care

Coming from computers, the product of machine-learning algorithms offers “the sheen of objectivity,” according to Ghassemi. But that can be deceptive and dangerous, because it’s harder to ferret out the faulty data supplied en masse to a computer than it is to discount the recommendations of a single possibly inept (and maybe even racist) doctor. “The problem is not machine learning itself,” she insists. “It’s people. Human caregivers generate bad data sometimes because they are not perfect.” Nevertheless, she still believes that machine learning can offer benefits in health care in terms of more efficient and fairer recommendations and practices. One key to realizing the promise of machine learning in health care is to improve the quality of data, which is no easy task. “Imagine if we could take data from doctors that have the best performance and share that with other doctors that have less training and experience,” Ghassemi says. “We really need to collect this data and audit it.” The challenge here is that the collection of data is not incentivized or rewarded, she notes.
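The auditing Ghassemi calls for can start with something as basic as checking whether each demographic group is adequately represented in the training data. A minimal sketch, with illustrative field names and an arbitrary 10% threshold (both are assumptions, not from the source):

```python
from collections import Counter

def audit_representation(records, group_key, min_fraction=0.1):
    """Flag groups that fall below `min_fraction` of the dataset.

    Underrepresented groups are exactly where a model trained on this
    data is most likely to fail, as in the diagnosis example above.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return sorted(g for g, n in counts.items() if n / total < min_fraction)
```

Flagged groups would then prompt targeted data collection before training, rather than after a model has already produced skewed recommendations.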

Unpatched Security Bugs in Medical Wearables Allow Patient Tracking, Data Theft

Beyond individual devices, Kaspersky reported finding concerning flaws in the most common wearable device platform, Qualcomm Snapdragon Wearable. The platform has been riddled with bugs, the team added, bringing the total number of vulnerabilities found in the platform since it was launched in 2020 to 400 — many still unpatched. This makes for an enormous, vulnerable attack surface across the healthcare sector, while attacks are getting more frequent, brazen and destructive. It's up to hospitals and medical service providers to build telehealth systems with security in mind, Nate Warfield, CTO of Prevailion, wrote in Threatpost last summer. He called on the private sector to lend a hand to shore up critical healthcare infrastructure, and lauded groups like the CTI League and the COVID-19 Cyber Threat Coalition, formed at the beginning of the pandemic to share threat intelligence against a rising tide of attacks. "Cyber-threats to healthcare won't slow down, even after the pandemic is over," Warfield explained. "Hospitals need to take more aggressive action to fortify themselves against these attacks…They also need to increase their investments in cybersecurity."

New Open-Source Multi-Cloud Asset to build SaaS

Regulated industries such as financial institutions, insurance, healthcare and more, all want the advantages of a hybrid cloud, but need assurance they can protect their assets and maintain compliance with industry and regulatory requirements. The key to hosting regulated workloads in the cloud is to eliminate and mitigate the risks that might be standing in the way of progress. In regulated industries, critical risks fall into the general categories of compliance, cybersecurity, governance, business continuity and resilience. The DevSecOps approach of our CI/CD pipelines is based on an IBM DevSecOps reference architecture, helping to address some of the risks faced by regulated industries. The CI/CD pipelines include steps to collect and upload deployment log files, artifacts, and evidence to a secure evidence locker. In addition, a toolchain integration with IBM Security and Compliance Center verifies the security and compliance posture of the toolchain by identifying the location of the evidence locker and the presence of the evidence information.
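The "evidence locker" idea boils down to each pipeline step recording a tamper-evident fingerprint of what it produced. A minimal sketch of that pattern (the record fields and function name are illustrative, not IBM's API):

```python
import hashlib
from datetime import datetime, timezone

def record_evidence(artifact_bytes, artifact_name, locker):
    """Append a tamper-evident record of a deployment artifact.

    Storing the SHA-256 digest alongside the artifact name and a
    timestamp lets an auditor later verify that the deployed artifact
    is byte-for-byte the one the pipeline produced.
    """
    entry = {
        "artifact": artifact_name,
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    locker.append(entry)
    return entry
```

In a real pipeline the locker would be append-only, access-controlled object storage; the compliance tooling then only needs to confirm the locker's location and that the expected entries are present.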

Decentralized technology will end the Web3 privacy conundrum

It’s not just willful negligence, of course. There is a good technical reason that web applications today are unable to execute on existing blockchain architectures. Because all participants are currently forced to re-execute all transactions in order to verify the state of their ledger, every service on a blockchain is effectively time-sharing a single, finite, global compute resource. Another reason that privacy has not been prioritized is that it’s very hard to guarantee. Historically, privacy tools have been slow and inefficient, and making them more scalable is hard work. But just because privacy is hard to implement doesn’t mean it shouldn’t be a priority. The first step is to make privacy simpler for the user. Achieving privacy in crypto should not require clunky workarounds, shady tools or deep expertise in complex cryptography. Blockchain networks, including smart contract platforms, should support optional privacy that works as easily as clicking a button. Blockchain technology is poised to answer these calls with security measures that guarantee utmost privacy with social accountability.
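The re-execution cost described above is easy to see in miniature: to verify the current ledger state, a participant must replay every transaction since genesis. A toy transfer model, purely illustrative of the pattern rather than any real chain's format:

```python
def replay_ledger(transactions, initial_balances):
    """Naively verify ledger state by re-executing every transaction.

    Every verifying participant runs this same loop over the same full
    history, which is why all services on such a chain end up
    time-sharing one global compute resource.
    """
    balances = dict(initial_balances)
    for sender, receiver, amount in transactions:
        if balances.get(sender, 0) < amount:
            raise ValueError(f"invalid transfer from {sender}")
        balances[sender] -= amount
        balances[receiver] = balances.get(receiver, 0) + amount
    return balances
```

The work grows with the length of the history and is duplicated by every verifier; succinct-proof and rollup designs aim to let verifiers check a small proof instead of replaying the loop.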

Four tips to increase executive buy-in to disaster recovery

One of the core issues when it comes to communicating technology concerns to a business audience is the use of appropriate vocabulary and the ability to communicate context. Tech-rich terminology will immediately switch off those who don’t understand it, and ambiguous references that don’t adequately explain the business impact or the everyday prevalence of security threats will fall on deaf ears. In terms of disaster recovery, the word “disaster”, for example, is often associated with low-probability events such as a widespread outage due to an earthquake, flood or act of terrorism, and fails to adequately communicate the prevalence of data loss events. In reality, however, most downtime is caused by mundane, everyday events such as hardware failure, human error, severe weather or power outages. This has become even more the case since the pandemic drove widespread adoption of hybrid and home working. As employees work remotely with greater frequency, employee-based incidents are on the rise, wreaking havoc on IT environments.

Quote for the day:

"If the owner of the land leads you, you cannot get lost." -- Ugandan Proverb
