Daily Tech Digest - October 04, 2024

Over 80% of phishing sites now target mobile devices

Mishing (mobile phishing) was highlighted as the top security challenge plaguing the mobile space, in both the public sector (10%) and the private sector. More importantly, 76% of phishing sites now use HTTPS, giving users a false sense of security. “Phishing using HTTPS is not completely new,” said Krishna Vishnubhotla, vice president of product strategy at Zimperium. “Last year’s report revealed that, between 2021 and 2022, the percentage of phishing sites targeting mobile devices increased from 75% to 80%. Some of them were already using HTTPS, but the focus was converting campaigns to target mobile.” “This year, we are seeing a meteoric rise in this tactic for mobile devices, which is a sign of maturing tactics on mobile, and it makes sense. The mobile form factor is conducive to deceiving the user because we rarely see the URL in the browser or the quick redirects. Moreover, we are conditioned to believe a link is secure if it has a padlock icon next to the URL in our browsers. Especially on mobile, users should look beyond the lock icon and carefully verify the website’s domain name before entering any sensitive information,” Vishnubhotla said.
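Vishnubhotla’s advice, verify the domain rather than trusting the padlock, can be made mechanical. A minimal sketch (the allowlist and the URLs below are hypothetical examples, not from the report):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the user actually intends to visit.
TRUSTED_DOMAINS = {"example-bank.com", "mail.example.com"}

def domain_looks_trusted(url: str) -> bool:
    """Return True only if the URL's host is on the allowlist.

    A padlock (https://) says nothing about *who* owns the site, so the
    scheme is ignored entirely and the hostname is compared instead.
    """
    host = (urlparse(url).hostname or "").lower()
    # Accept exact matches and subdomains of trusted domains.
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# An HTTPS phishing URL still fails the check: the padlock is irrelevant.
print(domain_looks_trusted("https://example-bank.com.login-verify.net/signin"))  # False
print(domain_looks_trusted("https://mail.example.com/inbox"))                    # True
```

Note that the phishing URL embeds the trusted name as a subdomain of an attacker-controlled domain, which is exactly the trick that a quick glance at a mobile address bar misses.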


How GPT-4o defends your identity against AI-generated deepfakes

OpenAI’s latest model, GPT-4o, is designed to identify and stop these growing threats. It is an “autoregressive omni model, which accepts as input any combination of text, audio, image and video,” as described on its system card published on Aug. 8. OpenAI writes, “We only allow the model to use certain pre-selected voices and use an output classifier to detect if the model deviates from that.” Identifying potential deepfake multimodal content is one of the benefits of the design decisions that together define GPT-4o. Noteworthy is the amount of red teaming that has been done on the model, which is among the most extensive of recent-generation AI model releases industry-wide. All models need to train continually on attack data to keep their edge, and that is especially the case when it comes to keeping up with attackers’ deepfake tradecraft, which is becoming indistinguishable from legitimate content. ... GANs most often consist of two neural networks: a generator that produces synthetic data (images, video or audio) and a discriminator that evaluates its realism. The generator’s goal is to improve the content’s quality until it deceives the discriminator. This adversarial technique creates deepfakes nearly indistinguishable from real content.
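The generator/discriminator tug-of-war can be shown at toy scale. This is a deliberately tiny sketch, not a real deepfake model: the “data” is a single number drawn from N(4, 0.5), the generator and discriminator are one-parameter-pair functions, and the gradients are written out by hand:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gan_step(params, real, z, lr=0.05):
    """One adversarial update on scalar data.

    Generator G(z) = a*z + b produces a fake sample; discriminator
    D(x) = sigmoid(w*x + c) scores how 'real' x looks. The discriminator
    ascends log D(real) + log(1 - D(fake)); the generator then ascends
    log D(fake), i.e. it adjusts (a, b) to make its output fool D.
    """
    a, b, w, c = params
    fake = a * z + b

    # Discriminator update (push D(real) toward 1, D(fake) toward 0).
    gr = 1.0 - sigmoid(w * real + c)
    gf = -sigmoid(w * fake + c)
    w += lr * (gr * real + gf * fake)
    c += lr * (gr + gf)

    # Generator update (push D(fake) toward 1), backpropagated through D.
    gl = 1.0 - sigmoid(w * fake + c)
    a += lr * gl * w * z
    b += lr * gl * w
    return (a, b, w, c)

# Toy run: real data clusters around 4.0, the generator starts near 0.
rng = np.random.default_rng(0)
params = (1.0, 0.0, 0.5, 0.0)  # a, b, w, c
for _ in range(3000):
    params = gan_step(params, real=rng.normal(4.0, 0.5), z=rng.normal())
print(f"generator offset b is now {params[1]:.2f}")
```

As training alternates, the generator offset b should drift toward the real data’s mean, which is the scalar analogue of fakes becoming statistically indistinguishable from real samples.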


The 4 Evolutions of Your Observability Journey

Known unknowns describe the second stage well: we’re looking at things we know we don’t know, and trying to develop our understanding of those unknowns, whereas if these were unknown unknowns, we wouldn’t even know where to start. If the first stage is where most of your observability tooling lies, then this is the era of service-level objectives (SLOs); this is also the stage where observability starts being phrased in a “yes, and” manner. ... Having developed the ability to ask questions about what happened in a system in the past, you’re probably now primarily concerned with statistical questions and with developing more comprehensive correlations. ... Additionally, one of the most interesting developments here is when your incident reports change: they stop being concerned with what happened and start being concerned with how unusual or surprising it was. You’re seeing this stage of the observability journey firsthand if you’ve ever read a retrospective that said something like, “We were surprised by the behavior, so we dug in. Even though our alerts were telling us that this other thing was the problem, we investigated the surprising thing first.”
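The core arithmetic of the SLO era is the error budget: an availability target implicitly allows a fixed number of failures per window, and teams track how much of that allowance has been spent. A minimal sketch (the 99.9% target and request counts are hypothetical):

```python
# A minimal error-budget calculation for a service-level objective (SLO).
SLO_TARGET = 0.999  # 99.9% of requests must succeed over the window

def error_budget_remaining(total_requests: int, failed_requests: int) -> float:
    """Fraction of the window's error budget still unspent (negative = blown)."""
    allowed_failures = total_requests * (1.0 - SLO_TARGET)
    return 1.0 - failed_requests / allowed_failures

# 10M requests at 99.9% => 10,000 allowed failures; 2,500 seen so far.
print(round(error_budget_remaining(10_000_000, 2_500), 3))  # 0.75
```

A burn-rate alert is just this quantity watched over time: it fires not on any single failure but on spending the budget faster than the window allows, which is exactly the shift from “what happened” to “how unusual was it.”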


Be the change you want to see: How to show initiative in the workplace

At one point or another, all of us are probably guilty of posing a question without offering a solution. Often we may feel that others are more qualified to address an issue than we are, and that as long as we bring the matter to someone’s attention, that’s as far as we need to go. While this is all well and good, and certainly not every scenario can be dealt with single-handedly, it can be good practice to brainstorm ideas for the problems you identify. It’s important to loop people in and utilise the expertise of others, but you should also have confidence in your ability to tackle an issue. Identifying the problem is half the battle, so why not keep going and see what you come up with? ... Some are born with confidence to spare and some are not; luckily, it is a skill that can be developed over time. Working on improving your confidence level, being more vocal and presenting yourself as an expert in your field are crucial to improving your ability to show initiative, as they mean you are far more likely to take the reins and lead the way. Taking the initiative or going out on a limb can, in many scenarios, be nerve-wracking, and you may doubt that you are the best person for the job.


What is RPA? A revolution in business process automation

RPA is often touted as a mechanism to bolster ROI or reduce costs, but it can also be used to improve customer experience. For example, enterprises such as airlines employ thousands of customer service agents, yet customers are still waiting in queues to have their calls fielded. A chatbot could help alleviate some of that wait. ... COOs were some of the earliest adopters of RPA. In many cases, they bought RPA and hit a wall during implementation, prompting them to ask for IT’s help (and forgiveness). Now citizen developers without technical expertise are using cloud software to implement RPA in their business units, and often the CIO has to step in and block them. Business leaders must involve IT from the outset to ensure they get the resources they require. ... Many implementations fail because design and change are poorly managed, says Sanjay Srivastava, chief digital officer of Genpact. In the rush to get something deployed, some companies overlook communication exchanges between the various bots, which can break a business process. “Before you implement, you must think about the operating model design,” Srivastava says. “You need to map out how you expect the various bots to work together.” 
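Srivastava’s point about mapping how bots work together can be made concrete before anything is deployed: model each bot and which other bots’ outputs it consumes, and fail fast if the wiring is circular. A minimal sketch using Python’s standard-library `graphlib` (the bot names and dependencies are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical map: each bot lists the bots whose output it consumes.
bot_dependencies = {
    "invoice_scraper": [],
    "data_validator": ["invoice_scraper"],
    "erp_updater":    ["data_validator"],
    "email_notifier": ["erp_updater", "data_validator"],
}

# static_order() yields a valid execution order and raises CycleError if
# the hand-offs form a loop, catching a broken design before any bot runs.
run_order = list(TopologicalSorter(bot_dependencies).static_order())
print(run_order)
```

Even this much of an operating-model map makes the inter-bot communication exchanges explicit, which is precisely what the rushed deployments the article describes tend to skip.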


Best practices for implementing threat exposure management, reducing cyber risk exposure

Threat exposure management is the evolution of traditional vulnerability management, and several trends are making it a priority for modern security teams. The first is an increase in findings that overwhelms resource-constrained teams: as the attack surface expands to cloud and applications, the volume of findings is compounded by fragmentation. Cloud, on-prem, and AppSec vulnerabilities come from different tools; identity misconfigurations come from others still. This leads to enormous manual work to centralize, deduplicate, and prioritize findings using a common risk methodology. Finally, all of this is happening while attackers are moving faster than ever, with recent reports showing the median time to exploit a vulnerability is less than one day. Threat exposure management (TEM) is essential because it continuously identifies and prioritizes risks, such as vulnerabilities and misconfigurations, across all assets, using the risk context applicable to your organization. By integrating with existing security tools, TEM offers a comprehensive view of potential threats, empowering teams to take proactive, automated actions to mitigate risks before they can be exploited.
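The centralize/deduplicate/prioritize step can be sketched in a few lines. This is an illustrative toy, not any vendor’s implementation; the tool names, fields, and the exposure-based risk weighting are all assumptions:

```python
# Hypothetical findings from three different scanners, two reporting the
# same CVE on the same asset.
findings = [
    {"source": "cloud_scanner",  "asset": "web-01",      "issue": "CVE-2024-0001",        "severity": 9.8},
    {"source": "appsec_scanner", "asset": "web-01",      "issue": "CVE-2024-0001",        "severity": 9.1},
    {"source": "iam_auditor",    "asset": "svc-account", "issue": "over-privileged-role", "severity": 7.5},
]

def consolidate(findings, exposed_assets):
    """Deduplicate by (asset, issue), keep the highest severity seen,
    and boost issues on internet-exposed assets so they sort first."""
    merged = {}
    for f in findings:
        key = (f["asset"], f["issue"])
        if key not in merged or f["severity"] > merged[key]["severity"]:
            merged[key] = dict(f)
    for f in merged.values():
        f["risk"] = f["severity"] * (2.0 if f["asset"] in exposed_assets else 1.0)
    return sorted(merged.values(), key=lambda f: f["risk"], reverse=True)

queue = consolidate(findings, exposed_assets={"web-01"})
print([(f["issue"], f["risk"]) for f in queue])
```

The point of the sketch is the shape of the problem: three tools, two records of one exposure, and a single ranked queue built with one risk methodology instead of three per-tool severity scales.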


Understanding VBS Enclaves, Windows’ new security technology

Microsoft recently extended its virtualization-based security model to what it calls VBS Enclaves. If you’ve looked at implementing confidential computing on Windows Server or in Azure, you’ll be familiar with the concept of enclaves, which use Intel’s SGX instruction set to lock down areas of memory and treat them as a trusted execution environment. ... So how do you build and use VBS Enclaves? First, you’ll need Windows 11 or Windows Server 2019 or later, with VBS enabled. You can do this from the Windows Security tool, via Group Policy, or with Intune to control it via MDM. VBS is part of the Memory Integrity service, so you should enable it on all supported devices to help reduce security risks, even if you don’t plan to use VBS Enclaves in your code. The best way to think of an enclave is as a way of using encrypted storage securely. For example, if you’re using a database to store sensitive data, you can use code running in an enclave to process and query that data, passing results to the rest of your application. You’re encapsulating data in a secure environment with only essential access allowed. No other part of your system has access to the decryption keys, so on-disk data stays secure.
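The pattern being described, sensitive data processed inside an isolated component with only results crossing the boundary, can be illustrated outside Windows entirely. The toy below is a Python analogy, not the VBS Enclave API, and its XOR “cipher” exists only to show where the boundary sits; a real enclave enforces the isolation with hardware-backed memory, not by coding convention:

```python
import hashlib

class ToyEnclave:
    """Illustrative only: mimics an enclave boundary in plain Python.

    The key is held privately; callers submit queries and receive
    aggregate results, never the plaintext store or the key itself.
    """

    def __init__(self, key: bytes, records):
        self.__key = key
        # "Encrypt" records with a keyed XOR stream (toy, NOT secure).
        self.__store = [self.__xor(r.encode()) for r in records]

    def __xor(self, data: bytes) -> bytes:
        stream = hashlib.sha256(self.__key).digest()
        return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

    def count_matching(self, substring: str) -> int:
        # Decrypt inside the boundary; return only the aggregate result.
        return sum(substring in self.__xor(c).decode() for c in self.__store)

enclave = ToyEnclave(b"secret-key", ["alice:4411", "bob:4412", "alice:9001"])
print(enclave.count_matching("alice"))  # 2
```

The application sees the count, never the decrypted rows, which is the same division of labor as a query-processing enclave sitting in front of an encrypted database.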


Smart(er) Subsea Cables to Provide Early Warning System

With the U.N. estimating between 150 and 200 cable faults annually, operators need all the help they can get to maintain the global fiber network, which carries about 99% of internet traffic between continents; an estimated $10 trillion in financial transactions flows over these cables each day. This has businesses desperately seeking network resiliency and clamoring for always-on network services as their data centers and apps demand maximum uptime. The system has been beset this year by large cable outages, starting in February in the Red Sea and continuing in the spring along West Africa. ... Equipping the cable with sensors would enhance research into one of the most under-explored regions of the planet: the vast depths of the Southern Ocean, the study read. The Southern Ocean, which surrounds Antarctica, strongly influences other oceans and climates worldwide, according to the NSF. “Equipping the subsea telecommunications cable with sensors would help researchers better understand how deep-sea currents contribute to global climate change and improve understanding of earthquake seismology and related early warning signs for tsunamis in the earthquake-prone South Pacific region.”


Security Needs to Be Simple and Secure By Default: Google

"Google engineers are working to secure AI and to bring AI to security practitioners," said Steph Hay, senior director of Gemini + UX for cloud security at Google. "Gen AI represents the inflection point of security. It is going to transform security workflows and give the defender the advantage." ... Google also advocates for the convergence of security products and embedding AI into the entire security ecosystem. Through Mandiant, VirusTotal and the Google Cloud Platform, Google aims to drive this convergence, along with safe browsing. Google is making this convergence possible by taking a platform-centric approach through its Security Command Center, or SCC. Hemrajani shared that SCC aims to unify security categories such as cloud security posture management, Kubernetes security posture management, entitlement management and threat intelligence. Security information and event management and security orchestration, automation and response also need to converge. "SCC is bringing all of these together to be able to model the risk that you are exposed to in a holistic manner," he said. "We also realize that there is a power of convergence between cloud risk management and security operations. We need to converge them even further and bring them together to truly benefit."


The AI Revolution: How Machine Learning Changed the World in Two Years

The future of AI in business will involve continued collaboration between governments, businesses, and individuals to address challenges and maximize the opportunities presented by this transformative technology. AI is likely to become increasingly integrated into software and hardware, making it easier for businesses to adopt and utilize its capabilities. Success will depend on how it is leveraged to augment human capabilities rather than replace them, creating a future where humans and AI work together in a complementary way. Beyond automating individual tasks, AI is driving a paradigm shift toward unprecedented efficiency across entire business operations. By automating repetitive tasks, AI allows employees to focus on more strategic and creative work, leading to increased productivity and innovation. A recent McKinsey study found AI could potentially automate 45% of the activities currently performed by workers. As well as automating processes, AI can streamline operations and minimize errors, leading to significant cost savings for businesses. For example, automating customer service with AI can reduce the need for human agents, lowering labor costs.



Quote for the day:

"Intelligence is the ability to change your mind when presented with accurate information that contradicts your beliefs" -- Vala Afshar
