Daily Tech Digest - February 03, 2025


Quote for the day:

"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous


The CISO’s role in advancing innovation in cybersecurity

CISOs must know the risks of adopting untested solutions, keeping in mind their organization’s priorities and learning how to evaluate new tools and technologies. “We also ensure both parties have clear, shared goals from the start, so we avoid misunderstandings and set everyone up for success,” Maor tells CSO. ... It’s a golden era of cybersecurity innovation driven by emerging cybersecurity threats, but it’s a tale of two companies, according to Perlroth. AI is attracting significant amounts of funding while it’s harder for many other types of startups. Cybersecurity companies continue to get a lot of interest from venture capital (VC) firms, although she’s seeing founders themselves eschewing big general funds in favor of funds and investors with industry knowledge. “Startup founders frequently want to work with venture capitalists who have some kind of specific value add or cyber expertise,” says Perlroth. In this environment, there’s more potential for CISOs to be involved and those with an appetite for the business side of cyber innovation can look for opportunities to advise and invest in new businesses. Cyber-focused venture capital (VC) firms often engage CISOs to participate in advisory panels and assist with due diligence when vetting startups, according to Haleliuk. 


The risks of supply chain cyberattacks on your organisation

Organisations need to ensure they take steps to prevent the risk of key suppliers falling victim to cyberattacks. A good starting point is to work out just where they are most exposed, says Lorri Janssen-Anessi, director of external cyber assessments at BlueVoyant. “Understand your external attack surface and third-party integrations to ensure there are no vulnerabilities,” she urges. “Consider segmentation of critical systems and minimise the blast radius of a breach. Identify the critical vendors or suppliers and ensure those important digital relationships have stricter security practices in place.” Bob McCarter, CTO at NAVEX, believes there needs to be a stronger emphasis on cybersecurity when selecting and reviewing suppliers. “Suppliers need to have essential security controls including multi-factor authentication, phishing education and training, and a Zero Trust framework,” he says. “To avoid long-term financial loss, they must also adhere to relevant cybersecurity regulations and industry standards.” But it’s also important to regularly perform risk assessments, even once the relationship is established, says Janssen-Anessi. “The supply chain ecosystem is not static,” she warns. “Networks and systems are constantly changing to ensure usability. To stay ahead of vulnerabilities or risks that may pop up, it is important to continuously monitor these suppliers.”


Deepseek's AI model proves easy to jailbreak - and worse

On Thursday, Unit 42, a cybersecurity research team at Palo Alto Networks, published results on three jailbreaking methods it employed against several distilled versions of DeepSeek's V3 and R1 models. ... "Our research findings show that these jailbreak methods can elicit explicit guidance for malicious activities," the report states. "These activities include keylogger creation, data exfiltration, and even instructions for incendiary devices, demonstrating the tangible security risks posed by this emerging class of attack." Researchers were able to prompt DeepSeek for guidance on how to steal and transfer sensitive data, bypass security, write "highly convincing" spear-phishing emails, conduct "sophisticated" social engineering attacks, and make a Molotov cocktail. They were also able to manipulate the models into creating malware. ... "While information on creating Molotov cocktails and keyloggers is readily available online, LLMs with insufficient safety restrictions could lower the barrier to entry for malicious actors by compiling and presenting easily usable and actionable output," the paper adds. ... "By circumventing standard restrictions, jailbreaks expose how much oversight AI providers maintain over their own systems, revealing not only security vulnerabilities but also potential evidence of cross-model influence in AI training pipelines," it continues.


10 skills and traits of successful digital leaders

An important skill for CIOs is strategic thinking, which means adopting a “why” mindset, notesGill Haus, CIO of consumer and community banking at JPMorgan Chase. “I ask questions all the time — even on subjects I think I’m most knowledgeable about,” Haus says. “When others see their leader asking questions, even in the company of more senior leaders, it creates a welcoming atmosphere that encourages everyone to feel safe doing the same. ... Effective leaders have a clear vision of what technology can do for their organization as well as a solid understanding of it, agrees Stephanie Woerner, director and principal research scientist at the MIT’s Center for Information Systems Research (CISR). “They think about the new things they can do with technology, different ways of getting work done or engaging with customers, and how technology enables that.” ... Being able to translate complex technical concepts into clear business value while also maintaining realistic implementation timelines is another important skill. Tech leaders are up to their eyeballs in data, systems, and processes, but all users want is that a product works. A strong digital leader should constantly ask themselves how they can make something easier for their customers. 


Prompt Injection for Large Language Models

Many businesses put all of their secrets into the system prompt, and if you're able to steal that prompt, you have all of their secrets. Some of the companies are a bit more clever, and they put their data into files that are then put into the context or referenced by the large language model. In these cases, you can just ask the model to provide you links to download the documents it knows about. Sometimes there are interesting URLs pointing to internal documents, such as Jira, Confluence, and the like. You can learn about the business and its data that it has available. That can be really bad for the business. Another thing you might want to do with these prompt injections is to gain personal advantages. Imagine a huge company, and they have a big HR department, they receive hundreds of job applications every day, so they use an AI based tool to evaluate which candidates are a fit for the open position. ... Another approach to make your models less sensitive to prompt injection and prompt stealing is to fine-tune them. Fine-tuning basically means you take a large language model that has been trained by OpenAI, Meta, or some other vendor, and you retrain it with additional data to make it more suitable for your use case.


The hidden dangers of a toxic cybersecurity workplace

Certain roles in cybersecurity are more vulnerable to toxic environments due to the nature of their responsibilities and visibility within the organization. SOC analysts, for instance, are often on the frontlines, dealing with high-pressure situations like incident response and threat mitigation. The expectation to always be “on” can lead to burnout, especially in a culture that prioritizes output over well-being. Similarly, CISOs face unique challenges as they balance technical, strategic, and political pressures. They’re often caught between managing expectations from the C-suite and addressing operational realities. CISO burnout is very real, driven in part by the immense liability and scrutiny associated with the role. The constant pressure, combined with the growing complexity of threats, leads many CISOs to leave their positions, with some even vowing, “never again will I do this job.” This trend is tragic, as organizations lose experienced leaders who play a critical role in shaping cybersecurity strategies. ... Leaders play a crucial role in fostering a positive culture and must take proactive steps to address toxicity. They should prioritize open communication and actively solicit feedback from their teams on a regular basis. Anonymous surveys, one-on-one meetings, and team discussions can help identify pain points. 


The Cultural Backlash Against Generative AI

Part of the problem is that generative AI really can’t effectively do everything the hype claims. An LLM can’t be reliably used to answer questions, because it’s not a “facts machine”. It’s a “probable next word in a sentence machine”. But we’re seeing promises of all kinds that ignore these limitations, and tech companies are forcing generative AI features into every kind of software you can think of. People hated Microsoft’s Clippy because it wasn’t any good and they didn’t want to have it shoved down their throats — and one might say they’re doing the same basic thing with an improved version, and we can see that some people still understandably resent it. When someone goes to an LLM today and asks for the price of ingredients in a recipe at their local grocery store right now, there’s absolutely no chance that model can answer that correctly, reliably. That is not within its capabilities, because the true data about those prices is not available to the model. The model might accidentally guess that a bag of carrots is $1.99 at Publix, but it’s just that, an accident. In the future, with chaining models together in agentic forms, there’s a chance we could develop a narrow model to do this kind of thing correctly, but right now it’s absolutely bogus. But people are asking LLMs these questions today! And when they get to the store, they’re very disappointed about being lied to by a technology that they thought was a magic answer box.


Developers: The Last Line of Defense Against AI Risks

Considering security early in the software development lifecycle has not traditionally been a standard practice amongst developers. Of course, this oversight is a goldmine for cybercriminals who exploit ML models to inject harmful malware into software. The lack of security training for developers makes the issue worse, particularly when AI-generated code, trained on potentially insecure open source data, is not adequately screened for vulnerabilities. Unfortunately, once AI/ML models integrate such code, the potential for undetected exploits only increases. Therefore, developers must also function as security champions, and DevOps and Security can no longer be considered separate functions. ... As AI continues to be implemented at scale by different teams, the need for advanced security in ML models is key. Enter the “Shift Left” approach, which advocates for integrating security measures early in the software lifecycle to get ahead and prevent as many future vulnerabilities as possible and ensure comprehensive security throughout the development process. This strategy is critical in AI/ML development, before they’re even deployed, to ensure the security and compliance of code and models, which often come from external sources and sometimes cannot be trusted.


How Leaders Can Leverage AI For Data Management And Decision-Making

“The real challenge isn’t just the cost of storing data—it’s making sense of it,” explains Nilo Rahmani, CEO of Thoras.ai. “An estimated 80% of incident resolution time is spent simply identifying the root cause, which is a costly inefficiency that AI can help solve.” AI-powered analytics can detect patterns, predict failures, and automate troubleshooting, reducing downtime and improving reliability. By leveraging AI, companies can streamline their data operations while increasing speed and accuracy in decision-making. Effective data management extends beyond simple storage—it requires real-time intelligence to ensure organizations are using the right data at the right time. AI plays a critical role in distinguishing meaningful data from noise, helping companies focus on insights that drive growth. ... AI is poised to revolutionize data management, but success will depend on how well organizations integrate it into their existing frameworks. Companies that embrace AI-driven automation, predictive analytics, and proactive infrastructure management will not only reduce costs but also gain a competitive edge by making faster, smarter decisions. Leaders must shift their focus from simply collecting and storing data to using it intelligently. 


Ramping Up AI Adoption in Local Government

One of the biggest barriers stopping local authorities from embracing AI is the lack of knowledge and misunderstanding around the technology. For many years the fear of the unknown has caused confusion, with numerous news articles claiming modern technology poses a threat to humanity. This could not be further from the truth. ... One key area that is missing from the AI Opportunities Actions Plan is managing and upskilling workers. People are core to every transformation, even ones that are digitally focused. To truly unlock the power of AI, employees need to be supported and trained in a judgement free space, allowing them to disclose any concerns or areas of support. After years of fear-mongering some employees may be more hesitant to engage with an AI transformation. Therefore, it’s up to leaders to adopt a top-down approach to promoting and embracing AI in the workplace. To begin, a skills audit should be conducted, assessing the existing knowledge and experiences with AI-related skills. Based on this, customised training plans can be developed to ensure everyone within the organisation feels supported and confident. It’s important for leaders to emphasise that a digital transformation doesn’t mean job cuts, but rather, takes away the time-consuming jobs and allows staff to focus on higher value, creative and strategic work.

Daily Tech Digest - February 01, 2025


Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Laundry


5 reasons the enterprise data center will never die

Cloud repatriation — enterprises pulling applications back from the cloud to the data center — remains a popular option for a variety of reasons. According to a June 2024 IDC survey, about 80% of 2,250 IT decision-maker respondents “expected to see some level of repatriation of compute and storage resources in the next 12 months.” IDC adds that the six-month period between September 2023 and March 2024 saw increased levels of repatriation plans “across both compute and storage resources for AI lifecycle, business apps, infrastructure, and database workloads.” ... According to Forrester’s 2023 Infrastructure Cloud Survey, 79% of roughly 1,300 enterprise cloud decision-makers said their firms are implementing internal private clouds, which will use virtualization and private cloud management. Nearly a third (31%) of respondents said they are building internal private clouds using hybrid cloud management solutions such as software-defined storage and API-consistent hardware to make the private cloud more like the public cloud, Forrester adds. ... “Edge is a crucial technology infrastructure that extends and innovates on the capabilities found in core datacenters, whether enterprise- or service-provider-oriented,” says IDC. The rise of edge computing shatters the binary “cloud-or-not-cloud” way of thinking about data centers and ushers in an “everything everywhere all at once” distributed model


How to Understand and Manage Cloud Costs with a Data-Driven Strategy

Understanding your cloud spend starts with getting serious about data. If your cloud usage grew organically across teams over time, you're probably staring at a bill that feels more like a puzzle than a clear financial picture. You know you're paying too much, and you have an idea of where the spending is happening across compute, storage, and networking, but you are not sure which teams are overspending, which applications are being overprovisioned, and so on. Multicloud environments add even another layer of complexity to data visibility. ... With a holistic view of your data established, the next step is augmenting tools to gain a deeper understanding of your spending and application performance. To achieve this, consider employing a surgical approach by implementing specialized cost management and performance monitoring tools that target specific areas of your IT infrastructure. For example, granular financial analytics can help you identify and eliminate unnecessary expenses with precision. Real-time visibility tools provide immediate insights into cost anomalies and performance issues, allowing for prompt corrective actions. Governance features ensure that spending aligns with budgetary constraints and compliance requirements, while integration capabilities with existing systems facilitate seamless data consolidation and analysis across different platforms. 


Top cybersecurity priorities for CFOs

CFOs need to be aware of the rising threats of cyber extortion, says Charles Soranno, a managing director at global consulting firm Protiviti. “Cyber extortion is a form of cybercrime where attackers compromise an organization’s systems, data or networks and demand a ransom to return to normal and prevent further damage,” he says. Beyond a ransomware attack, where data is encrypted and held hostage until the ransom is paid, cyber extortion can involve other evolving threats and tactics, Soranno says. “CFOs are increasingly concerned about how these cyber extortion schemes impact lost revenue, regulatory fines [and] potential payments to bad actors,” he says. ... “In collaboration with other organizational leaders, CFOs must assess the risks posed by these external partners to identify vulnerabilities and implement a proactive mitigation and response plan to safeguard from potential threats and issues.” While a deep knowledge of the entire supply chain’s cybersecurity posture might seem like a luxury for some organizations, the increasing interconnectedness of partner relationships is making third-party cybersecurity risk profiles more of a necessity, Krull says. “The reliance on third-party vendors and cloud services has grown exponentially, increasing the potential for supply chain attacks,” says Dan Lohrmann, field CISO at digital services provider Presidio. 


GDPR authorities accused of ‘inactivity’

The idea that the GDPR has brought about a shift towards a serious approach to data protection has largely proven to be wishful thinking, according to a statement from noyb. “European data protection authorities have all the necessary means to adequately sanction GDPR violations and issue fines that would prevent similar violations in the future,” Schrems says. “Instead, they frequently drag out the negotiations for years — only to decide against the complainant’s interests all too often.” ... “Somehow it’s only data protection authorities that can’t be motivated to actually enforce the law they’re entrusted with,” criticizes Schrems. “In every other area, breaches of the law regularly result in monetary fines and sanctions.” Data protection authorities often act in the interests of companies rather than the data subjects, the activist suspects. It is precisely fines that motivate companies to comply with the law, reports the association, citing its own survey. Two-thirds of respondents stated that decisions by the data protection authority that affect their own company and involve a fine lead to greater compliance. Six out of ten respondents also admitted that even fines imposed on other organizations have an impact on their own company. 


The three tech tools that will take the heat off HR teams in 2025

As for the employee review process, a content services platform enables HR employees to customise processes, routing approvals to the right managers, department heads, and people ops. This means that employee review processes can be expedited thanks to customisable forms, with easier goal setting, identification of upskilling opportunities, and career progression. When paperwork and contracts are uniform, customisable, and easily located, employers are equipped to support their talent to progress as quickly as possible – nurturing more fulfilled employees who want to stick around. ... Naturally, a lot of HR work is form-heavy, with anything from employee onboarding and promotions to progress reviews and remote working requests requiring HR input. However, with a content services platform, HR professionals can route and approve forms quickly, speeding up the process with digital forms that allow employees to enter information quickly and accurately. Going one step further, HR leaders can leverage automated workflows to route forms to approvers as soon as an employee completes them – cutting out the HR intermediary. ... Armed with a single source of truth, HR professionals can take advantage of automated workflows, enabling efficient notifications and streamlining HR compliance processes.


AI Could Turn Against You — Unless You Fix Your Data Trust Issues

Without unified standards for data formats, definitions, and validations, organizations struggle to establish centralized control. Legacy systems, often ill-equipped to handle modern data volumes, further exacerbate the problem. These systems were designed for periodic updates rather than the continuous, real-time streams demanded by AI, leading to inefficiencies and scalability limitations. To address these challenges, organizations must implement centralized governance, quality, and observability within a single framework. This enables them to leverage data lineage and track their data as it moves through systems to ensure transparency and identify issues in real-time. It also ensures they can regularly validate data integrity to support consistent, reliable AI models by conducting real-time quality checks. ... For organizations to maximize the potential of AI, they must embed data trust into their daily operations. This involves using automated systems like data observability to validate data integrity throughout its lifecycle, integrated governance to maintain reliability, and assuring continuous validation within evolving data ecosystems. By addressing data quality challenges and investing in unified platforms, organizations can transform data trust into a strategic advantage. 


Backdoor in Chinese-made healthcare monitoring device leaks patient data

“By reviewing the firmware code, the team determined that the functionality is very unlikely to be an alternative update mechanism, exhibiting highly unusual characteristics that do not support the implementation of a traditional update feature,” CISA said in its analysis report. “For example, the function provides neither an integrity checking mechanism nor version tracking of updates. When the function is executed, files on the device are forcibly overwritten, preventing the end customer — such as a hospital — from maintaining awareness of what software is running on the device.” In addition to this hidden remote code execution behavior, CISA also found that once the CMS8000 completes its startup routine, it also connects to that same IP address over port 515, which is normally associated with the Line Printer Daemon (LPD), and starts transmitting patient information without the device owner’s knowledge. “The research team created a simulated network, created a fake patient profile, and connected a blood pressure cuff, SpO2 monitor, and ECG monitor peripherals to the patient monitor,” the agency said. “Upon startup, the patient monitor successfully connected to the simulated IP address and immediately began streaming patient data to the address.”


3 Considerations for Mutual TLS (mTLS) in Cloud Security

Traditional security approaches often rely on IP whitelisting as a primary method of access control. While this technique can provide a basic level of security, IP whitelists operate on a fundamentally flawed assumption: that IP addresses alone can accurately represent trusted entities. In reality, this approach fails to effectively model real-world attack scenarios. IP whitelisting provides no mechanism for verifying the integrity or authenticity of the connecting service. It merely grants access based on network location, ignoring crucial aspects of identity and behavior. In contrast, mTLS addresses these shortcomings by focusing on cryptographic identity(link is external) rather than network location. ... In the realm of mTLS, identity is paramount. It's not just about encrypting data in transit; it's about ensuring that both parties in a communication are exactly who they claim to be. This concept of identity in mTLS warrants careful consideration. In a traditional network, identity might be tied to an IP address or a shared secret. But, in the modern world of cloud-native applications, these concepts fall short. mTLS shifts the mindset by basing identity on cryptographic certificates. Each service possesses its own unique certificate, which serves as its identity card.


Artificial Intelligence Versus the Data Engineer

It’s worth noting that there is a misconception that AI can prepare data for AI, when the reality is that, while AI can accelerate the process, data engineers are still needed to get that data in shape before it reaches the AI processes and models and we see the cool end results. At the same time, there are AI tools that can certainly accelerate and scale the data engineering work. So AI is both causing and solving the challenge in some respects! So, how does AI change the role of the data engineer? Firstly, the role of the data engineer has always been tricky to define. We sit atop a large pile of technology, most of which we didn’t choose or build, and an even larger pile of data we didn’t create, and we have to make sense of the world. Ostensibly, we are trying to get to something scientific. ... That art comes in the form of the intuition required to sift through the data, understand the technology, and rediscover all the little real-world nuances and history that over time have turned some lovely clean data into a messy representation of the real world. The real skill great data engineers have is therefore not the SQL ability but how they apply it to the data in front of them to sniff out the anomalies, the quality issues, the missing bits and those historical mishaps that must be navigated to get to some semblance of accuracy.


How engineering teams can thrive in 2025

Adopting a "fail forward" mentality is crucial as teams experiment with AI and other emerging technologies. Engineering teams are embracing controlled experimentation and rapid iteration, learning from failures and building knowledge. ... Top engineering teams will combine emerging technologies with new ways of working. They’re not just adopting AI—they’re rethinking how software is developed and maintained as a result of it. Teams will need to stay agile to lead the way. Collaboration within the business and access to a multidisciplinary talent base is the recipe for success. Engineering teams should proactively scenario plan to manage uncertainty by adopting agile frameworks like the "5Ws" (Who, What, When, Where, and Why.) This approach allows organizations to tailor tech adoption strategies and marry regulatory compliance with innovation. Engineering teams should also actively address AI bias and ensure fair and responsible AI deployment. Many enterprises are hiring responsible AI specialists and ethicists as regulatory standards are now in force, including the EU AI Act, which impacts organizations with users in the European Union. As AI improves, the expertise and technical skills that proved valuable before need to be continually reevaluated. Organizations that successfully adopt AI and emerging tech will thrive. 


Daily Tech Digest - January 31, 2025


Quote for the day:

“If you genuinely want something, don’t wait for it–teach yourself to be impatient.” -- Gurbaksh Chahal


GenAI fueling employee impersonation with biometric spoofs and counterfeit ID fraud

The annual AuthenticID report underlines the surging wave of AI-powered identity fraud, with rising biometric spoofs and counterfeit ID fraud attempts. The 2025 State of Identity Fraud Report also looks at how identity verification tactics and technology innovations are tackling the problem. “In 2024, we saw just how sophisticated fraud has now become: from deepfakes to sophisticated counterfeit IDs, generative AI has changed the identity fraud game,” said Blair Cohen, AuthenticID founder and president. ... “In 2025, businesses should embrace the mentality to ‘think like a hacker’ to combat new cyber threats,” said Chris Borkenhagen, chief digital officer and information security officer at AuthenticID. “Staying ahead of evolving strategies such as AI deepfake-generated documents and biometrics, emerging technologies, and bad actor account takeover tactics are crucial in protecting your business, safeguarding data, and building trust with customers.” ... Face biometric verification company iProov has identified the Philippines as a particular hotspot for digital identity fraud, with corresponding need for financial institutions and consumers to be vigilant. “There is a massive increase at the moment in terms of identity fraud against systems using generative AI in particular and deepfakes,” said iProove chief technology officer Dominic Forrest.


Cyber experts urge proactive data protection strategies

"Every organisation must take proactive measures to protect the critical data it holds," Montel stated. Emphasising foundational security practices, he advised organisations to identify their most valuable information and protect potential attack paths. He noted that simple steps can drastically contribute to overall security. On the consumer front, Montel highlighted the pervasive nature of data collection, reminding individuals of the importance of being discerning about the personal information they share online. "Think before you click," he advised, underscoring the potential of openly shared public information to be exploited by cybercriminals. Adding to the discussion on data resilience, Darren Thomson, Field CTO at Commvault, emphasised the changing landscape of cyber defence and recovery strategies needed by organisations. Thompson pointed out that mere defensive measures are not sufficient; rapid recovery processes are crucial to maintain business resilience in the event of a cyberattack. The concept of a "minimum viable company" is pivotal, where businesses ensure continuity of essential operations even when under attack. With cybercriminal tactics becoming increasingly sophisticated, doing away with reliance solely on traditional backups is necessary. 


Trump Administration Faces Security Balancing Act in Borderless Cyber Landscape

The borderless nature of cyber threats and AI, the scale of worldwide commerce, and the globally interconnected digital ecosystem pose significant challenges that transcend partisanship. As recent experience makes us all too aware, an attack originating in one country, state, sector, or company can spread almost instantaneously, and with devastating impact. Consequently, whatever the ideological preferences of the Administration, from a pragmatic perspective cybersecurity must be a collaborative national (and international) activity, supported by regulations where appropriate. It’s an approach taken in the European Union, whose member states are now subject to the Second Network Information Security Directive (NIS2)—focused on critical national infrastructure and other important sectors—and the financial sector-focused Digital Operational Resilience Act (DORA). Both regulations seek to create a rising tide of cyber resilience that lifts all ships and one of the core elements of both is a focus on reporting and threat intelligence sharing. In-scope organizations are required to implement robust measures to detect cyber attacks, report breaches in a timely way, and, wherever possible, share the information they accumulate on threats, attack vectors, and techniques with the EU’s central cybersecurity agency (ENISA).


Infrastructure as Code: From Imperative to Declarative and Back Again

Today, tools like Terraform CDK (TFCDK) and Pulumi have become popular choices among engineers. These tools allow developers to write IaC using familiar programming languages like Python, TypeScript, or Go. At first glance, this is a return to imperative IaC. However, under the hood, they still generate declarative configurations — such as Terraform plans or CloudFormation templates — that define the desired state of the infrastructure. Why the resurgence of imperative-style interfaces? The answer lies in a broader trend toward improving developer experience (DX), enabling self-service, and enhancing accessibility. Much like the shifts we’re seeing in fields such as platform engineering, these tools are designed to streamline workflows and empower developers to work more effectively. ... The current landscape represents a blending of philosophies. While IaC tools remain fundamentally declarative in managing state and resources, they increasingly incorporate imperative-like interfaces to enhance usability. The move toward imperative-style interfaces isn’t a step backward. Instead, it highlights a broader movement to prioritize developer accessibility and productivity, aligning with the emphasis on streamlined workflows and self-service capabilities.


How to Train AI Dragons to Solve Network Security Problems

We all know AI’s mantra: More data, faster processing, large models and you’re off to the races. But what if a problem is so specific — like network or DDoS security — that it doesn’t have a lot of publicly or privately available data you can use to solve it? As with other AI applications, the quality of the data you feed an AI-based DDoS defense system determines the accuracy and effectiveness of its solutions. To train your AI dragon to defend against DDoS attacks, you need detailed, real-world DDoS traffic data. Since this data is not widely and publicly available, your best option is to work with experts who have access to this data or, even better, have analyzed and used it to train their own AI dragons. To ensure effective DDoS detection, look at real-world, network-specific data and global trends as they apply to the network you want to protect. This global perspective adds valuable context that makes it easier to detect emerging or worldwide threats. ... Predictive AI models shine when it comes to detecting DDoS patterns in real-time. By using machine learning techniques such as time-series analysis, classification and regression, they can recognize patterns of attacks that might be invisible to human analysts. 


How law enforcement agents gain access to encrypted devices

When a mobile device is seized, law enforcement can request the PIN, password, or biometric data from the suspect to access the phone if they believe it contains evidence relevant to an investigation. In England and Wales, if the suspect refuses, the police can give a notice for compliance, and a further refusal is in itself a criminal offence under the Regulation of Investigatory Powers Act (RIPA). “If access is not gained, law enforcement use forensic tools and software to unlock, decrypt, and extract critical digital evidence from a mobile phone or computer,” says James Farrell, an associate at cyber security consultancy CyXcel. “However, there are challenges on newer devices and success can depend on the version of operating system being used.” ... Law enforcement agencies have pressured companies to create “lawful access” solutions, particularly on smartphones, to take Apple as an example. “You also have the co-operation of cloud companies, which if backups are held can sidestep the need to break the encryption of a device all together,” Closed Door Security’s Agnew explains. The security community has long argued against law enforcement backdoors, not least because they create security weaknesses that criminal hackers might exploit. “Despite protests from law enforcement and national security organizations, creating a skeleton key to access encrypted data is never a sensible solution,” CreateFuture’s Watkins argues.


The quantum computing reality check

Major cloud providers have made quantum computing accessible through their platforms, which creates an illusion of readiness for enterprise adoption. However, this accessibility masks a fatal flaw: Most quantum computing applications remain experimental. Indeed, most require deep expertise in quantum physics and specialized programming knowledge. Real-world applications are severely limited, and the costs are astronomical compared to the actual value delivered. ... The timeline to practical quantum computing applications is another sobering reality. Industry experts suggest we’re still 7 to 15 years away from quantum systems capable of handling production workloads. This extended horizon makes it difficult to justify significant investments. Until then, more immediate returns could be realized through existing technologies. ... The industry’s fascination with quantum computing has made companies fear being left behind or, worse, not being part of the “cool kids club”; they want to deliver extraordinary presentations to investors and customers. We tend to jump into new trends too fast because the allure of being part of something exciting and new is just too compelling. I’ve fallen into this trap myself. ... Organizations must balance their excitement for quantum computing with practical considerations about immediate business value and return on investment. I’m optimistic about the potential value in QaaS. 


Digital transformation in banking: Redefining the role of IT-BPM services

IT-BPM services are the engine of digital transformation in banking. They streamline operations through automation technologies like RPA, enhancing efficiency in processes such as customer onboarding and loan approvals. This automation reduces errors and frees up staff for strategic tasks like personalised customer support. By harnessing big data analytics, IT-BPM empowers banks to personalise services, detect fraud, and make informed decisions, ultimately improving both profitability and customer satisfaction. Robust security measures and compliance monitoring are also integral, ensuring the protection of sensitive customer data in the increasingly complex digital landscape. ... IT-BPM services are crucial for creating seamless, multi-channel customer experiences. They enable the development of intuitive platforms, including AI-driven chatbots and mobile apps, providing instant support and convenient financial management. This focus extends to personalised services tailored to individual customer needs and preferences, and a truly integrated omnichannel experience across all banking platforms. Furthermore, IT-BPM fosters agility and innovation by enabling rapid development of new digital products and services and facilitating collaboration with fintech companies.


Revolutionizing data management: Trends driving security, scalability, and governance in 2025

Artificial Intelligence and Machine Learning transform traditional data management paradigms by automating labour-intensive processes and enabling smarter decision-making. In the upcoming years, augmented data management solutions will drive efficiency and accuracy across multiple domains, from data cataloguing to anomaly detection. AI-driven platforms process vast datasets to identify patterns, automating tasks like metadata tagging, schema creation and data lineage mapping. ... In 2025, data masking will not be merely a compliance tool for GDPR, HIPPA, or CCPA; it will be a strategic enabler. With the rise in hybrid and multi-cloud environments, businesses will increasingly need to secure sensitive data across diverse systems. Specific solutions like IBM, K2view, Oracle and Informatica will revolutionize data masking by offering scale-based, real-time, context-aware masking. ... Real-time integration enhances customer experiences through dynamic pricing, instant fraud detection, and personalized recommendations. These capabilities rely on distributed architectures designed to handle diverse data streams efficiently. The focus on real-time integration extends beyond operational improvements. 


Deploying AI at the edge: The security trade-offs and how to manage them

The moment you bring compute nodes into the far edge, you’re automatically exposing a lot of security challenges in your network. Even if you expect them to be “disconnected devices,” they could intermittently connect to transmit data. So, your security footprint is expanded. You must ensure that every piece of the stack you’re deploying at the edge is secure and trustworthy, including the edge device itself. When considering security for edge AI, you have to think about transmitting the trained model, runtime engine, and application from a central location to the edge, opening up the opportunity for a person-in-the-middle attack. ... In military operations, continuous data streams from millions of global sensors generate an overwhelming volume of information. Cloud-based solutions are often inadequate due to storage limitations, processing capacity constraints, and unacceptable latency. Therefore, edge computing is crucial for military applications, enabling immediate responses and real-time decision-making. In commercial settings, many environments lack reliable or affordable connectivity. Edge AI addresses this by enabling local data processing, minimizing the need for constant communication with the cloud. This localized approach enhances security. Instead of transmitting large volumes of raw data, only essential information is sent to the cloud. 


Daily Tech Digest - January 30, 2025


Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley


Doing authentication right

Like encryption, authentication is one of those things that you are tempted to “roll your own” but absolutely should not. The industry has progressed enough that you should definitely “buy and not build” your authentication solution. Plenty of vendors offer easy-to-implement solutions and stay diligently on top of the latest security issues. Authentication also becomes a tradeoff between security and a good user experience. ... Passkeys are a relatively new technology and there is a lot of FUD floating around out there about them. The bottom line is that they are safe, secure, and easy for your users. They should be your primary way of authenticating. Several vendors make implementing passkeys not much harder than inserting a web component in your application. ... Forcing users to use hard-to-remember passwords means they will be more likely to write them down or use a simple password that meets the requirements. Again, it may seem counterintuitive, but XKCD has it right. In addition, the longer the password, the harder it is to crack. Let your users create long, easy-to-remember passwords rather than force them to use shorter, difficult-to-remember passwords. ... Six digits is the outer limit for OTP links, and you should consider shorter ones. Under no circumstances should you require OTPs longer than six digits because they are vastly harder for users to keep in short-term memory.


Augmenting Software Architects with Artificial Intelligence

Technical debt is mistakenly thought of as just a source code problem, but the concept is also applicable to source data (this is referred to as data debt) as well as your validation assets. AI has been used for years to analyze existing systems to identify potential opportunities to improve the quality (to pay down technical debt). SonarQube, CAST SQG and BlackDuck’s Coverity Static Analysis statically analyze existing code. Applitools Visual AI dynamically finds user interface (UI) bugs and Veracode’s DAST to find runtime vulnerabilities in web apps. The advantages of this use case are that it pinpoints aspects of your implementation that potentially should be improved. As described earlier, AI tooling offers to the potential for greater range, thoroughness, and trustworthiness of the work products as compared with that of people. Drawbacks to using AI-tooling to identify technical debt include the accuracy, IP, and privacy risks described above. ... As software architects we regularly work with legacy implementations that they need to leverage and often evolve. This software is often complex, using a myriad of technologies for reasons that have been forgotten over time. Tools such as CAST Imaging visualizes existing code and ChartDB visualizes legacy data schemas to provide a “birds-eye view” of the actual situation that you face.


Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat'

Your first step should be to evaluate the state of your company’s cyber defenses, including communications and IT infrastructure, and the cybersecurity measures you already have in place—identifying any vulnerabilities and gaps. One vulnerability to watch for is a dependence on multiple security platforms, patches, policies, hardware, and software, where a lack of tight integration can create gaps that hackers can readily exploit. Consider using operational resilience assessment software as part of the exercise, and if you lack the internal know-how or resources to manage the assessment, consider enlisting a third-party operational resilience risk consultant. ... Aging network communications hardware and software, including on-premises systems and equipment, are top targets for hackers during a disaster because they often include a single point of failure that’s readily exploitable. The best counter in many cases is to move the network and other key communications infrastructure (a contact center, for example) to the cloud. Not only do cloud-based networks such as SD-WAN, (software-defined wide area network) have the resilience and flexibility to preserve connectivity during a disaster, they also tend to come with built-in cybersecurity measures.


California’s AG Tells AI Companies Practically Everything They’re Doing Might Be Illegal

“The AGO encourages the responsible use of AI in ways that are safe, ethical, and consistent with human dignity,” the advisory says. “For AI systems to achieve their positive potential without doing harm, they must be developed and used ethically and legally,” it continues, before dovetailing into the many ways in which AI companies could, potentially, be breaking the law. ... There has been quite a lot of, shall we say, hyperbole, when it comes to the AI industry and what it claims it can accomplish versus what it can actually accomplish. Bonta’s office says that, to steer clear of California’s false advertising law, companies should refrain from “claiming that an AI system has a capability that it does not; representing that a system is completely powered by AI when humans are responsible for performing some of its functions; representing that humans are responsible for performing some of a system’s functions when AI is responsible instead; or claiming without basis that a system is accurate, performs tasks better than a human would, has specified characteristics, meets industry or other standards, or is free from bias.” ... Bonta’s memo clearly illustrates what a legal clusterfuck the AI industry represents, though it doesn’t even get around to mentioning U.S. copyright law, which is another legal gray area where AI companies are perpetually running into trouble.


Knowledge graphs: the missing link in enterprise AI

Knowledge graphs are a layer of connective tissue that sits on top of raw data stores, turning information into contextually meaningful knowledge. So in theory, they’d be a great way to help LLMs understand the meaning of corporate data sets, making it easier and more efficient for companies to find relevant data to embed into queries, and making the LLMs themselves faster and more accurate. ... Knowledge graphs reduce hallucinations, he says, but they also help solve the explainability challenge. Knowledge graphs sit on top of traditional databases, providing a layer of connection and deeper understanding, says Anant Adya, EVP at Infosys. “You can do better contextual search,” he says. “And it helps you drive better insights.” Infosys is now running proof of concepts to use knowledge graphs to combine the knowledge the company has gathered over many years with gen AI tools. ... When a knowledge graph is used as part of the RAG infrastructure, explicit connections can be used to quickly zero in on the most relevant information. “It becomes very efficient,” said Duvvuri. And companies are taking advantage of this, he says. “The hard question is how many of those solutions are seen in production, which is quite rare. But that’s true of a lot of gen AI applications.”


U.S. Copyright Office says AI generated content can be copyrighted — if a human contributes to or edits it

The Copyright Office determined that prompts are generally instructions or ideas rather than expressive contributions, which are required for copyright protection. Thus, an image generated with a text-to-image AI service such as Midjourney or OpenAI’s DALL-E 3 (via ChatGPT), on its own could not qualify for copyright protection. However, if the image was used in conjunction with a human-authored or human-edited article (such as this one), then it would seem to qualify. Similarly, for those looking to use AI video generation tools such as Runway, Pika, Luma, Hailuo, Kling, OpenAI Sora, Google Veo 2 or others, simply generating a video clip based on a description would not qualify for copyright. Yet, a human editing together multiple AI generated video clips into a new whole would seem to qualify. The report also clarifies that using AI in the creative process does not disqualify a work from copyright protection. If an AI tool assists an artist, writer or musician in refining their work, the human-created elements remain eligible for copyright. This aligns with historical precedents, where copyright law has adapted to new technologies such as photography, film and digital media. ... While some had called for additional protections for AI-generated content, the report states that existing copyright law is sufficient to handle these issues.


From connectivity to capability: The next phase of private 5G evolution

Faster connectivity is just one positive aspect of private 5G networks; they are the basis of the current digital era. These networks outperform conventional public 5G capabilities, giving businesses incomparable control, security, and flexibility. For instance, private 5G is essential to the seamless connection of billions of devices, ensuring ultra-low latency and excellent reliability in the worldwide IoT industry, which has the potential to reach $650.5 billion by 2026, as per Markets and Markets. Take digital twins, for example—virtual replicas of physical environments such as factories or entire cities. These replicas require real-time data streaming and ultra-reliable bandwidth to function effectively. Private 5G enables this by delivering consistent performance, turning theoretical models into practical tools that improve operational efficiency and decision-making. ... Also, for sectors that rely on efficiency and precision, the private 5G is making big improvements in this area. For instance, in the logistics sector, it connects fleets, warehouses, and ports with fast, low-latency networks, streamlining operations throughout the supply chain. In fleet management, private 5G allows real-time tracking of vehicles, improving route planning and fuel use. 


American CISOs should prepare now for the coming connected-vehicle tech bans

The rule BIS released is complex and intricate and relies on many pre-existing definitions and policies used by the Commerce Department for different commercial and industrial matters. However, in general, the restrictions and compliance obligations under the rule affect the entire US automotive industry, including all-new, on-road vehicles sold in the United States (except commercial vehicles such as heavy trucks, for which rules will be determined later.) All companies in the automotive industry, including importers and manufacturers of CVs, equipment manufacturers, and component suppliers, will be affected. BIS said it may grant limited specific authorizations to allow mid-generation CV manufacturers to participate in the rule’s implementation period, provided that the manufacturers can demonstrate they are moving into compliance with the next generation. ... Connected vehicles and related component suppliers are required to scrutinize the origins of vehicle connectivity systems (VCS) hardware and automated driving systems (ADS) software to ensure compliance. Suppliers must exclude components with links to the PRC or Russia, which has significant implications for sourcing practices and operational processes.


What to know about DeepSeek AI, from cost claims to data privacy

"Users need to be aware that any data shared with the platform could be subject to government access under China's cybersecurity laws, which mandate that companies provide access to data upon request by authorities," Adrianus Warmenhoven, a member of NordVPN's security advisory board, told ZDNET via email. According to some observers, the fact that R1 is open-source means increased transparency, giving users the opportunity to inspect the model's source code for signs of privacy-related activity. Regardless, DeepSeek also released smaller versions of R1, which can be downloaded and run locally to avoid any concerns about data being sent back to the company (as opposed to accessing the chatbot online). ... "DeepSeek's new AI model likely does use less energy to train and run than larger competitors' models," confirms Peter Slattery, a researcher on MIT's FutureTech team who led its Risk Repository project. "However, I doubt this marks the start of a long-term trend in lower energy consumption. AI's power stems from data, algorithms, and compute -- which rely on ever-improving chips. When developers have previously found ways to be more efficient, they have typically reinvested those gains into making even bigger, more powerful models, rather than reducing overall energy usage."


The AI Imperative: How CIOs Can Lead the Charge

For CIOs, AGI will take this to the next level. Imagine systems that don't just fix themselves but also strategize, optimize and innovate. AGI could automate 90% of IT operations, freeing up teams to focus on strategic initiatives. It could revolutionize cybersecurity by anticipating and neutralizing threats before they strike. It could transform data into actionable insights, driving smarter decisions across the organization. The key is to begin incrementally, prove the value and scale strategically. AGI isn't just a tool; it's a game-changer. ... Cybersecurity risks are real and imminent. Picture this: you're using an open-source AI model and suddenly, your system gets hacked. Turns out, a malicious contributor slipped in some rogue code. Sounds like a nightmare, right? Open-source AI is powerful, but has its fair share of risks. Vulnerabilities in the code, supply chain attacks and lack of appropriate vendor support are absolutely real concerns. But this is true for any new technology. With the right safeguards, we can minimize and mitigate these risks. Here's what I recommend: Regularly review and update open-source libraries. CIOs should encourage their teams to use tools like software composition analysis to detect suspicious changes. Train your team to manage and secure open-source AI deployments. 

Daily Tech Digest - January 29, 2025


Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer


Evil Models and Exploits: When AI Becomes the Attacker

A more structured threat emerges with technologies like the Model Context Protocol (MCP). Originally introduced by Anthropic, MCP allows large language models (LLMs) to interact with host machines via JavaScript APIs. This enables LLMs to perform sophisticated operations by controlling local resources and services. While MCP is being embraced by developers for legitimate use cases, such as automation and integration, its darker implications are clear. An MCP-enabled system could orchestrate a range of malicious activities with ease. Think of it as an AI-powered operator capable of executing everything from reconnaissance to exploitation. ... The proliferation of AI models is both a blessing and a curse. Platforms like Hugging Face host over a million models, ranging from state-of-the-art neural networks to poorly designed or maliciously altered versions. Amid this abundance lies a growing concern: model provenance. Imagine a widely used model, fine-tuned by a seemingly reputable maintainer, turning out to be a tool of a state actor. Subtle modifications in the training data set or architecture could embed biases, vulnerabilities or backdoors. These “evil models” could then be distributed as trusted resources, only to be weaponized later. This risk underscores the need for robust mechanisms to verify the origins and integrity of AI models.


The tipping point for Generative AI in banking

Advancements in AI are allowing banks and other fintechs to embed the technology across their entire value chain. For example, TBC is leveraging AI to make 42% of all payment reminder calls to customers with loans that are up to 30 days or less overdue and is getting ready to launch other AI-enabled solutions. Customers normally cannot differentiate the AI calls powered by our tech from calls by humans, even as the AI calls are ten times more efficient for TBC’s bottom line, compared with human operator calls. Klarna rolled out an AI assistant, which handled 2.3 million conversations in its first month of operation, which accounts for two-thirds of Klarna’s customer service chats or the workload of 700 full-time agents, the company estimated. Deutsche Bank leverages generative AI for software creation and managing adverse media, while the European neobank Bunq applies it to detect fraud. Even smaller regional players, provided they have the right tech talent in place, will soon be able to deploy Gen AI at scale and incorporate the latest innovations into their operations. Next year is set to be a watershed year when this step change will create a clear division in the banking sector between AI-enabled champions and other players that will soon start lagging behind. 


Want to be an effective cybersecurity leader? Learn to excel at change management

Security should never be an afterthought; the change management process shouldn’t be, either, says Michael Monday, a managing director in the security and privacy practice at global consulting firm Protiviti. “The change management process should start early, before changing out the technology or process,” he says. “There should be some messages going out to those who are going to be impacted letting them know, [otherwise] users will be surprised, they won’t know what’s going on, business will push back and there will be confusion.” ... “It’s often the CISO who now has to push these new things,” says Moyle, a former CISO, founding partner of the firm SecurityCurve, and a member of the Emerging Trends Working Group with the professional association ISACA. In his experience, Moyle says he has seen some workers more willing to change than others and learned to enlist those workers as allies to help him achieve his goals. ... When it comes to the people portion, she tells CISOs to “feed supporters and manage detractors.” As for process, “identify the key players for the security program and understand their perspective. There are influencers, budget holders, visionaries, and other stakeholders — each of which needs to be heard, and persuaded, especially if they’re a detractor.”


Preparing financial institutions for the next generation of cyber threats

Collaboration between financial institutions, government agencies, and other sectors is crucial in combating next-generation threats. This cooperative approach enhances the ability to detect, respond to, and mitigate sophisticated threats more effectively. Visa regularly works with international agencies of all sizes to bring cybercriminals to justice. In fact, Visa regularly works alongside law enforcement, including the US Department of Justice, FBI, Secret Service and Europol, to help identify and apprehend fraudsters and other criminals. Visa uses its AI and ML capabilities to identify patterns of fraud and cybercrime and works with law enforcement to find these bad actors and bring them to justice. ... Financial institutions face distinct vulnerabilities compared to other industries, particularly due to their role in critical infrastructure and financial ecosystems. As high-value targets, they manage large sums of money and sensitive information, making them prime targets for cybercriminals. Their operations involve complex and interconnected systems, often including legacy technologies and numerous third-party vendors, which can create security gaps. Regulatory and compliance challenges add another layer of complexity, requiring stringent data protection measures to avoid hefty fines and maintain customer trust.


Looking back to look ahead: from Deepfakes to DeepSeek what lies ahead in 2025

Enterprises increasingly turned to AI-native security solutions, employing continuous multi-factor authentication and identity verification tools. These technologies monitor behavioral patterns or other physical world signals to prove identity —innovations that can now help prevent incidents like the North Korean hiring scheme. However, hackers may now gain another inside route to enterprise security. The new breed of unregulated and offshore LLMs like DeepSeek creates new opportunities for attackers. In particular, using DeepSeek’s AI model gives attackers a powerful tool to better discover and take advantage of the cyber vulnerabilities of any organization. ... Deepfake technology continues to blur the lines between reality and fiction. ... Organizations must combat the increasing complexity of identity fraud, hackers, cyber security thieves, and data center poachers each year. In addition to all of the threats mentioned above, 2025 will bring an increasing need to address IoT and OT security issues, data protection in the third-party cloud and AI infrastructure, and the use of AI agents in the SOC. To help thwart this year’s cyber threats, CISOs and CTOs must work together, communicate often, and identify areas to minimize risks for deepfake fraud across identity, brand protection, and employee verification.


The Product Model and Agile

First, the product model is not new; it’s been out there for more than 20 years. So I have never argued that the product model is “the next new thing,” as I think that’s not true. Strong product companies have been following the product model for decades, but most companies around the world have only recently been exposed to this model, which is why so many people think of it as new. Second, while I know this irritates many people, today there are very different definitions of what it even means to be “Agile.” Some people consider SAFe as Agile. If that’s what you consider Agile, then I would say that Agile plays no part in the product model, as SAFe is pretty much the antithesis of the product model. This difference is often characterized today as “fake Agile” versus “real Agile.” And to be clear, if you’re running XP, or Kanban, or Scrum, or even none of the Agile ceremonies, yet you are consistently doing continuous deployment, then at least as far as I’m concerned, you’re running “real Agile.” Third, we should separate the principles of Agile from the various, mostly project management, processes that have been set up around those principles. ... Finally, it’s also important to point out that there is one Agile principle that might be good enough for custom or contract software work, but is not sufficient for commercial product work. This is the principle that “working software is the primary measure of progress.”


Next Generation Observability: An Architectural Introduction

It's always a challenge when creating architectural content, trying to capture real-world stories into a generic enough format to be useful without revealing any organization's confidential implementation details. We are basing these architectures on common customer adoption patterns. That's very different from most of the traditional marketing activities usually associated with generating content for the sole purpose of positioning products for solutions. When you're basing the content on actual execution in solution delivery, you're cutting out the marketing chuff. This observability architecture provides us with a way to map a solution using open-source technologies focusing on the integrations, structures, and interactions that have proven to work at scale. Where those might fail us at scale, we will provide other options. What's not included are vendor stories, which are normal in most marketing content. Those stories that, when it gets down to implementation crunch time, might not fully deliver on their promises. Let's look at the next-generation observability architecture and explore its value in helping our solution designs. The first step is always to clearly define what we are focusing on when we talk about the next-generation observability architecture.


AI SOC Analysts: Propelling SecOps into the future

Traditional, manual SOC processes already struggling to keep pace with existing threats are far outpaced by automated, AI-powered attacks. Adversaries are using AI to launch sophisticated and targeted attacks putting additional pressure on SOC teams. To defend effectively, organizations need AI solutions that can rapidly sort signals from noise and respond in real time. AI-generated phishing emails are now so realistic that users are more likely to engage with them, leaving analysts to untangle the aftermath—deciphering user actions and gauging exposure risk, often with incomplete context. ... The future of security operations lies in seamless collaboration between human expertise and AI efficiency. This synergy doesn't replace analysts but enhances their capabilities, enabling teams to operate more strategically. As threats grow in complexity and volume, this partnership ensures SOCs can stay agile, proactive, and effective. ... Triaging and investigating alerts has long been a manual, time-consuming process that strains SOC teams and increases risk. Prophet Security changes that. By leveraging cutting-edge AI, large language models, and advanced agent-based architectures, Prophet AI SOC Analyst automatically triages and investigates every alert with unmatched speed and accuracy.


Apple researchers reveal the secret sauce behind DeepSeek AI

The ability to use only some of the total parameters of a large language model and shut off the rest is an example of sparsity. That sparsity can have a major impact on how big or small the computing budget is for an AI model. AI researchers at Apple, in a report out last week, explain nicely how DeepSeek and similar approaches use sparsity to get better results for a given amount of computing power. Apple has no connection to DeepSeek, but Apple does its own AI research on a regular basis, and so the developments of outside companies such as DeepSeek are part of Apple's continued involvement in the AI research field, broadly speaking. In the paper, titled "Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models," posted on the arXiv pre-print server, lead author Samir Abnar of Apple and other Apple researchers, along with collaborator Harshay Shah of MIT, studied how performance varied as they exploited sparsity by turning off parts of the neural net. ... Abnar and team ask whether there's an "optimal" level for sparsity in DeepSeek and similar models, meaning, for a given amount of computing power, is there an optimal number of those neural weights to turn on or off? It turns out you can fully quantify sparsity as the percentage of all the neural weights you can shut down, with that percentage approaching but never equaling 100% of the neural net being "inactive."


What Data Literacy Looks Like in 2025

“The foundation of data literacy lies in having a basic understanding of data. Non-technical people need to master the basic concepts, terms, and types of data, and understand how data is collected and processed,” says Li. “Meanwhile, data literacy should also include familiarity with data analysis tools. ... “Organizations should also avoid the misconception that fostering GenAI literacy alone will help developing GenAI solutions. For this, companies need even greater investments in expert AI talent -- data scientists, machine learning engineers, data engineers, developers and AI engineers,” says Carlsson. “While GenAI literacy empowers individuals across the workforce, building transformative AI capabilities requires skilled teams to design, fine-tune and operationalize these solutions. Companies must address both.” ... “Data literacy in 2025 can’t just be about enabling employees to work with data. It needs to be about empowering them to drive real business value,” says Jain. “That’s how organizations will turn data into dollars and ensure their investments in technology and training actually pay off.” ... “Organizations can embed data literacy into daily operations and culture by making data-driven thinking a core part of every role,” says Choudhary.