Showing posts with label botnet.

Daily Tech Digest - April 30, 2026


Quote for the day:

"You've got to get up every morning with determination if you're going to go to bed with satisfaction." --George Lorimer



The dreaded IT audit: How to get through it and what to avoid

The article "The dreaded IT audit: how to get through it and what to avoid" from IT Pro encourages organizations to reframe the auditing process as a strategic business asset rather than a burdensome cost center. Successfully navigating an audit requires maintaining a comprehensive, up-to-date inventory of all technology assets—including those used by remote workforces—to ensure security, safety, and insurance compliance. Even startups should establish structured auditing processes, as these evaluations proactively identify vulnerabilities and optimize operational efficiency. To streamline the experience, the article recommends prioritizing high-risk areas, such as software licensing, and utilizing customized spot checks instead of repetitive, standardized reviews that may fail to uncover meaningful insights. Crucially, leaders must adopt an open-minded approach to findings; the goal is to engage in transparent discussions about discovered issues rather than becoming defensive. Key pitfalls to avoid include treating the audit as a one-time administrative hurdle, relying on outdated manual tracking methods, and ignoring the gathered data. Instead, organizations should leverage audit results to inform staff training and drive practical improvements. By viewing the audit as a strategic opportunity for growth, companies can significantly strengthen their cybersecurity posture and ensure long-term sustainability in a digital economy.


Privacy in the AI era is possible, says Proton's CEO, but one thing keeps him up at night

In a wide-ranging interview at the Semafor World Economy Summit, Proton CEO Andy Yen addressed the critical tension between the rapid advancement of artificial intelligence and the fundamental right to digital privacy. Yen voiced significant concerns regarding the current AI trajectory, arguing that the industry's reliance on massive data harvesting inherently threatens individual security. He advocated for a paradigm shift toward "privacy-first AI," where processing occurs locally on user devices or through end-to-end encrypted frameworks to ensure that personal information remains inaccessible to service providers. Unlike the advertising-driven models of Silicon Valley giants, Yen highlighted Proton’s commitment to a subscription-based business model, which avoids the ethical pitfalls of monetizing user data. He also explored the "privacy paradox," observing that while users value their data, they often succumb to the convenience of free platforms. To counter this, Proton is expanding its ecosystem with tools like encrypted email and small language models designed specifically for security. Ultimately, Yen emphasized that the future of the digital economy hinges on stricter regulatory enforcement and the adoption of decentralized technologies that empower users with absolute control over their information, rather than treating them as products to be sold.


Outsourcing contracts weren't built for AI. CIOs are renegotiating now

The rapid advancement of generative artificial intelligence is necessitating a major overhaul of IT outsourcing agreements, as traditional contracts centered on headcount and billable hours prove incompatible with AI-driven efficiency. This InformationWeek article explains that while service providers promise productivity gains of up to 70%, legacy full-time equivalent (FTE) models fail to account for this increased output, leading CIOs to aggressively renegotiate for outcome-based pricing. This shift allows organizations to pay for specific results rather than human time, yet it introduces significant legal complexities. Key concerns include data sovereignty—where proprietary data might inadvertently train a provider's large language model—and intellectual property risks regarding the ownership of AI-generated code. Furthermore, the ability of AI to automate routine tasks is prompting some enterprises to bring previously outsourced functions back in-house, as smaller internal teams can now manage workloads that once required massive offshore cohorts. To navigate these challenges, technical leaders are implementing "gain-sharing" frameworks and rigorous governance standards to manage risks like AI hallucinations and liability. Ultimately, CIOs are assuming a more central role in procurement to ensure that vendor incentives align with genuine innovation and that the financial benefits of automation are captured by the enterprise.


Bad bots make up 40% of internet traffic

The "2026 Thales Bad Bot Report: Bad Bots in the Agentic Age" reveals a transformative shift in internet traffic, where automated activity now accounts for 53% of all web interactions, surpassing human traffic for the second consecutive year. Malicious "bad bots" alone comprise 40% of global traffic, highlighting a growing threat landscape. A critical finding is the 12.5x surge in AI-driven bot attacks, fueled by the rapid adoption of agentic AI which blurs the lines between legitimate and harmful automation. These advanced bots are increasingly targeting APIs, with 27% of attacks now bypassing traditional interfaces to exploit backend logic directly at machine speed. The financial services sector remains the most vulnerable, suffering 24% of all bot attacks and nearly half of all account takeover incidents. Thales experts, including Tim Chang, emphasize that the primary security challenge has evolved from simple bot identification to the complex analysis of behavioral intent. As AI agents emerge as a new traffic category, organizations must transition to proactive, intent-based defenses that can distinguish between helpful AI agents and malicious automation. This machine-driven era necessitates deeper visibility into API traffic and identity systems to maintain trust and security across modern digital infrastructures.


Incentive drift: Why transformation fails even when everything looks green

In the article "Incentive Drift: Why Transformation Fails Even When Everything Looks Green," Mehdi Kadaoui explores the paradoxical failure of IT transformations that appear successful on paper. The central challenge is "incentive drift"—the structural separation of authority from accountability that leads organizations to optimize for project delivery rather than business value. This drift manifests through several destructive patterns: the "ownership vacuum," where strategy and execution are disconnected; the "budgetary firewall," which isolates capital spending from operational costs; and "language capture," where success definitions are subtly redefined to ensure "green" status. Kadaoui argues that "collective amnesia" often follows, as organizations quietly lower their expectations to avoid acknowledging failure. To resolve this, he proposes making drift "structurally expensive" through three key mechanisms. First, a "value prenup" requires operational leaders to explicitly own and sign off on intended outcomes before development begins. Second, a "cost mirror" forces transparency across budget ledgers. Finally, a "semantic anchor" ensures original goals are read aloud in every governance meeting to prevent meaning erosion. By grounding digital transformation in rigid accountability and linguistic clarity, leadership can ensure that technological outputs translate into genuine, durable enterprise value.


How to Be a Great Data Steward: 6 Core Skills to Build

The article "Core Data Stewardship Skills to Build" emphasizes that effective data stewardship requires a unique blend of technical proficiency, business acumen, and interpersonal skills. High-performing stewards act as "purple people," bridging the gap between IT and business by translating complex technical standards into actionable business practices. Key operational activities include identifying and documenting Critical Data Elements (CDEs), aligning them with precise business terms, and performing data profiling to identify quality issues. Beyond basic documentation, stewards must master data classification to ensure regulatory compliance with frameworks like GDPR or HIPAA. Analytical thinking is essential for interpreting patterns and uncovering root causes of data inconsistencies, while strong communication skills enable stewards to foster a collaborative, data-driven culture. Furthermore, literacy in adjacent domains such as metadata management, master data management (MDM), and the use of modern data catalogs is vital. Ultimately, the role is outcome-driven; stewards do not just manage data for its own sake but focus on ensuring data health to drive measurable organizational value. By combining attention to detail with strategic consistency, data stewards serve as the essential operational guardians who transform raw data into a reliable, high-quality strategic asset for their organizations.


Researchers unearth industrial sabotage malware that predated Stuxnet by 5 years

Researchers from SentinelOne recently uncovered a sophisticated malware framework, dubbed "Fast16," that predates the infamous Stuxnet worm by five years. Active as early as 2005, this discovery shifts the timeline of state-sponsored industrial sabotage, proving that nation-states were deploying cyberweapons against physical infrastructure much earlier than previously understood. Unlike typical espionage tools designed for data theft, Fast16 was engineered for strategic sabotage by targeting high-precision floating-point arithmetic operations within engineering modeling software. By corrupting the logic of the Floating Point Unit (FPU), the malware produced subtly altered outputs in complex simulations, potentially leading to catastrophic real-world failures. The researchers identified three specific targeted engineering programs, including one previously associated with Iran’s AMAD nuclear program and another widely used in Chinese structural design. The modular nature of Fast16, which utilizes encrypted Lua bytecode, underscores its advanced design and national importance. This finding highlights a historical precedent for cyberattacks on critical workloads in fields such as advanced physics and nuclear research. Ultimately, Fast16 serves as a significant harbinger for modern industrial sabotage, demonstrating that the transition from strategic espionage to physical disruption in cyberspace was already in full swing two decades ago, long before Stuxnet gained global notoriety.


How AI Is Transforming Business Continuity and Crisis Response

Charlie Burgess’s article, "How AI Is Transforming Business Continuity and Crisis Response," explores the pivotal role of artificial intelligence in navigating the complexities of modern digital and physical risks. As businesses face increasingly non-linear threats, from supply chain disruptions to cyber incidents, the abundance of generated data often leads to information overload. AI addresses this by acting as a sophisticated data analysis tool that parses vast information streams to identify hidden patterns and suppress low-priority noise. This allows crisis teams to focus on critical alerts and early warning signs. Furthermore, AI enhances situational awareness and coordination by correlating disparate system inputs and surfacing standardized playbook responses. During active incidents, technologies like AI-powered cameras provide real-time visibility, aiding in personnel safety and evacuation efforts. Beyond immediate response, AI suggests optimized recovery paths and strategic resource allocation, fostering long-term operational resilience. Ultimately, the integration of AI is not intended to replace human judgment but to empower decision-makers with actionable insights and agility. By bridging the gap between data collection and decisive action, AI transforms business continuity from a reactive necessity into a proactive, evidence-based strategic asset that safeguards both personnel and organizational stability in an unpredictable global landscape.


Europe Gliding Toward Mandatory Online Age Verification

The European Commission is accelerating its push toward mandatory online age verification, driven by the Digital Services Act's requirements to protect minors from harmful content. Central to this initiative is a new age assurance framework and a "technically ready" open-source mobile app designed to allow users to prove they are over a certain age using national identity documents without disclosing their full identity. However, this transition faces intense scrutiny. Security researchers recently identified significant vulnerabilities in the commission's prototype app, labeling it "easily hackable." Furthermore, privacy advocates, such as representatives from Tuta, warn that centralized age verification creates a lucrative "gold mine" for hackers, potentially exacerbating risks like phishing and identity theft. Despite these concerns, European officials like Henna Virkkunen emphasize that the DSA demands concrete action over mere terms of service, particularly following allegations that platforms like Meta have failed to adequately exclude children under thirteen. As several European nations consider raising minimum age requirements for social media, the commission continues to advocate for "robust and non-discriminatory" verification tools that can be integrated into national digital wallets, insisting that ongoing security testing will eventually yield a reliable solution for safeguarding the digital environment for children.


CodeGuardian: A Model Context Protocol Server for AI-Assisted Code Quality Analysis and Security Scanning

"CodeGuardian: A Model Context Protocol Server for AI-Assisted Code Quality Analysis and Security Scanning" introduces a breakthrough tool designed to integrate enterprise-grade security and quality checks directly into AI-powered development environments. Authored by Madhvesh Kumar and Deepika Singh, the article details how CodeGuardian leverages the Model Context Protocol (MCP) to extend coding assistants with eleven specialized analysis tools. This integration eliminates the friction of context-switching by allowing developers to execute security scans, identify hardcoded secrets across multiple layers, and generate compliant Software Bill of Materials (SBOM) using simple natural language prompts. Unlike traditional static analysis tools that merely flag issues, CodeGuardian provides context-aware, "drop-in" code remediations tailored to a project's specific framework and style. A core feature is its cross-layer security reporting, which aggregates findings into a single risk score, exposing systemic vulnerabilities that isolated scanners often miss. By shifting security "left" into the immediate coding workflow, the tool empowers developers to build more resilient software while maintaining high delivery velocity. Ultimately, CodeGuardian represents a pivot toward "agentic" security, where AI assistants act as proactive guardians of code integrity throughout the development lifecycle, effectively bridging the gap between rapid feature delivery and robust organizational compliance.
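The hardcoded-secret detection described above can be illustrated with a minimal sketch. The rule names, regexes, and function below are illustrative assumptions, not CodeGuardian's actual implementation; production scanners layer in entropy analysis and far larger rule sets.

```python
import re

# Illustrative patterns only; real scanners ship hundreds of rules
# plus entropy checks for random-looking strings.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"
    ),
}

def scan_source(text):
    """Return (line_number, rule_name) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

sample = 'db = connect(host)\napi_key = "sk_live_abcdef0123456789XYZ"\n'
findings = scan_source(sample)  # flags line 2
```

Reporting the line number alongside the rule that fired is what makes such findings actionable inside an editor-integrated assistant rather than a batch report.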

Daily Tech Digest - October 03, 2025


Quote for the day:

"Success is the progressive realization of a worthy goal or ideal." -- Earl Nightingale



AI And The End Of Progress? Why Innovation May Be More Fragile Than We Think

“If progress was inevitable, the first industrial revolution would have happened a lot earlier,” he explained in our recent conversation. “And if progress was inevitable, most countries around the world would be rich and prosperous today.” Many societies have seen periods of intense innovation followed by stagnation or collapse. Ancient cities such as Ephesus once thrived and then disappeared. The Soviet Union industrialized rapidly but failed to keep up when the computer era began. ... Artificial intelligence sits squarely at the center of this fragile transition. Early breakthroughs, from transformers to generative AI, came from open experimentation in universities and small labs. ... Many organizations are using AI primarily for process automation and cost-cutting. Frey believes this will not deliver transformative growth. “If AI means we do email and spreadsheets a bit more efficiently and ease the way we book travel, the transformation is not going to be on par with electricity or the internal combustion engine,” he said. True prosperity comes from creating new industries and doing previously inconceivable things. ... “If you want to thrive as a business in the AI revolution, you need to give people at low levels of the organization more decision-making autonomy to actually implement the improvements they are finding for themselves,” he said.


Why every manager should have trauma literacy

Trauma literacy is the ability to recognize that unhealed past experiences show up in daily behavior and to respond in ways that foster safety and resilience. You don’t need to know someone’s history to be mindful of trauma’s effects. You just need to assume that trauma exists, and that it may be shaping how people show up at work. ... Managers are trained in financial strategy, forecasting, and performance management. But few are trained to recognize the external manifestations of what I felt back in that tech office: the racing heart, the sense of dread, and the silent withdrawal. Most workers are taught to push harder instead of pausing to hold space for emotions. Emotions are messy, and it often feels safer to stick with technical tasks and leave feelings unaddressed. ... Once someone shares something vulnerable, don’t rush to fix it or dismiss it. Just reflect it back: “Thanks for sharing that, I hear you,” or “That makes a lot of sense.” From there, you might ask, “Is there anything you need from me today?” or “Would it help to adjust your workload this week?” ... Trauma literacy isn’t a one-off conversation; it’s a culture. Build in rituals for reflection, adjust workloads proactively, and allocate time and resources toward psychological safety. When resilience is designed into structures, managers don’t have to rely on intuition alone.


Botnets are getting smarter and more dangerous

They don’t stop at automation. Natural language processing can be used to generate convincing phishing emails at scale. Reinforcement learning lets malware adjust strategies based on firewall responses. Image recognition can help bots evade visual CAPTCHAs. These capabilities give attackers a terrifying new playbook, one that relies less on scale and more on sophistication. What makes this trend especially insidious is that botnets can now be smaller and stealthier than ever. Instead of infecting millions of devices to overwhelm a system, an AI-driven botnet might only need a few thousand nodes to carry out highly targeted, surgical operations. That makes detection harder, attribution fuzzier and mitigation more complex. ... A compromised software development kit or node package manager can serve as a delivery mechanism for an AI-powered botnet, enabling it to infiltrate thousands of businesses in a single attack. From there, the botnet doesn’t just wait for instructions; it scouts, learns and adapts. IoT devices remain another massive vulnerability. ... The regulatory angle is becoming more critical as well. As botnet sophistication grows, governments and commercial organizations are being forced to reconsider their cybercrime frameworks. The blurred line between AI research and weaponization is becoming a legal gray zone. Will training a model to bypass CAPTCHA become criminalized? What about selling an AI model that can autonomously scan for zero-day exploits?


From Spend to Strategy: A CISO's View

Company executives view cybersecurity as a core business risk, but CISOs must communicate risk in a similar capacity to other risk functions through heat maps. These heat maps communicate the likelihood of a security incident impacting what matters most to the business - which includes key business capabilities, critical systems and services, and core locations or facilities - and the materiality of such an impact. Using these heat maps, CISOs can and should show the progress made in terms of reducing incident likelihood and impact, the progress expected to be made over the coming reporting period, and gaps that require additional funding to reduce corresponding risks to an acceptable level. From a security spend perspective, this means explaining to leadership how the function will deliver better business outcomes, not only with more budget but also with reallocated funding that can help create better ROI. CISOs must be prepared to answer inbound questions, such as: Haven't we already invested in this? What are you able to deliver with 20% more budget for these new capabilities that you weren't able to deliver before? Highly technical metrics like vulnerability counts, which have no direct correlation to business risk, must be avoided at all costs. It's about helping executives understand the progress being made and soon to be made, along with gaps tied to reducing risk related to what the business cares about most.
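The likelihood-and-impact heat map described above can be sketched minimally. The asset names, 1-5 scales, and band thresholds below are illustrative assumptions, not a prescribed methodology.

```python
# Illustrative likelihood x impact heat map; the assets, 1-5 scales,
# and band cutoffs are assumptions chosen for the sketch.
ASSETS = {
    # asset: (likelihood 1-5, impact 1-5)
    "payment processing": (4, 5),
    "customer portal":    (3, 4),
    "HQ badge system":    (2, 3),
    "internal wiki":      (3, 1),
}

def risk_band(likelihood, impact):
    """Map a likelihood/impact pair onto a reporting band."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

heat_map = {asset: risk_band(l, i) for asset, (l, i) in ASSETS.items()}
```

Re-scoring the same assets each reporting period is what lets a CISO show movement: a capability dropping from "high" to "medium" is progress leadership can see without any vulnerability counts.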


The Future of Data Center Security: What Businesses Must Know

Unlike in the past, when cyberattacks mainly targeted networks, today’s hackers combine online attacks with physical sabotage in what is known as the “dual-attack model.” For example, while a cybercriminal tries to breach a network firewall, another may attempt to disable equipment physically inside the data center building. This coordinated attack can cause far-reaching damage. ... Alongside security, power management is a top priority. Indian data centers face rising energy demands. Reports show rack power consumption is climbing steadily, especially for AI workloads. Mumbai and Hyderabad, leading India’s AI data center growth, are investing in advanced cooling technologies and reliable backup energy systems to ensure smooth operations and prevent downtime. Failures in cooling or power systems can cause major outages that result in millions in losses.  ... Cybersecurity experts also warn that more attacks today are concealed within encrypted network traffic, bypassing traditional firewalls. To counter this, Indian data centers are adopting tools that decrypt, inspect, and then re-encrypt data communications in real time. ... Indian companies must act decisively to implement next-generation security measures. Those that do will benefit from uninterrupted operations, stronger compliance, and gain a competitive edge in an increasingly digital economy.


4 ways to use time to level up your security monitoring

Most security events start small. You notice a few unusual logins, a traffic spike or abnormal activities in a certain system. Where raw log pipelines add parsing or enrichment delays before data is ready for analysis, time series data arrives consistently structured and ready for immediate querying. This makes it easier to establish behavioral baselines and even apply statistical models like rolling averages and standard deviations to detect anomalies quickly. ... Detection is only half the battle. Time series systems handle low-latency ingest, allowing alerts and triggers to be fired in real-time as new data points arrive. Whether a device needs to be quarantined, access tokens revoked or an attacker’s behavior spun up into a forensics workflow to prevent lateral movement, the system can act in real-time. Because most SaaS log platforms batch and index events before they are fully queryable, SIEM-driven responses can lag by minutes, depending on configuration and data volume. Time series systems process data points in real-time, reducing that lag. ... SIEMs remain indispensable, and logs are foundational for investigations and compliance. High-precision time series, continuously ingested and analyzed, enables faster detection, longer retention and real-time response. All without the cost and performance tradeoffs of relying on logs alone.
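The rolling-average-plus-standard-deviation baseline mentioned above fits in a few lines of Python. The window size, threshold, and login-rate series below are illustrative assumptions.

```python
import statistics
from collections import deque

def make_detector(window=30, threshold=3.0):
    """Flag a metric value as anomalous when it deviates from the
    rolling mean by more than `threshold` standard deviations."""
    history = deque(maxlen=window)

    def check(value):
        anomaly = False
        if len(history) >= 2:
            mean = statistics.fmean(history)
            stdev = statistics.stdev(history)
            if stdev > 0 and abs(value - mean) > threshold * stdev:
                anomaly = True
        history.append(value)  # update the baseline after the check
        return anomaly

    return check

# Logins per minute: a stable baseline, then a sudden spike.
check = make_detector(window=10, threshold=3.0)
series = [12, 11, 13, 12, 14, 11, 12, 13, 12, 11, 95]
flags = [check(v) for v in series]  # only the final spike is flagged
```

Because each check is a constant-time computation over a fixed window, the same logic can run inline on the ingest path, which is exactly where time series systems gain their latency advantage over batch-indexed log platforms.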


The Leadership Style That’s Winning in the AI Era

Technology can generate ideas and reinforce existing thinking, but it cannot replace authentic human connection. Quiet leaders understand this instinctively: They build credibility through genuine relationships, not algorithms. These leaders share a common set of principles and practices that guide how they work and show up for their teams ... Respect grows when leaders admit their limitations, take responsibility for mistakes and remain grounded. Employees appreciate leaders who share when they don’t have all the answers and ask others to contribute to solutions. This kind of openness increases their credibility and influence. ... The best leaders treat all conversations as learning opportunities. A curious leader doesn’t jump to conclusions or cut discussions short. They ask thoughtful questions and listen actively, signaling to their teams that their input matters. This kind of curiosity encourages innovation and creates space for better ideas to surface. ... Rather than seeking credit, quiet leaders focus on building organizations that thrive beyond any one individual. They delegate, ensuring that their team can take real ownership of projects and celebrate success together. ... Leaders who engage in the day-to-day work of the business gain credibility and insight. Whether it’s walking the production floor or sitting on customer service calls, this engagement deepens the understanding of the business, the customer experience and the challenges team members face.


How autonomous businesses succeed by engaging with the world

Autonomous machines are designed from the outside in, while conventional machines are designed from the inside out. We are witnessing a fundamental shift in how successful systems are designed, and agentic AI sits at the heart of this revolution. Today, businesses are being designed more and more to resemble machines. ... For companies becoming autonomous machines, this outside-in orientation has profound implications for how they think about customers, markets, and value creation. Traditional companies are often internally focused. They design products based on their capabilities, organize around their processes, and optimize for efficiency. Customers are external entities who hopefully will want what the company produces. The company's internal logic, its org chart, processes, and systems become the center of attention, with customers orbiting around these internal priorities. ... Autonomous companies must be world-oriented rather than center-oriented. Customers represent the primary external environment they need to understand and respond to, but they're not a center to be served; they're part of a dynamic world to be engaged with. Just as a Tesla can't function without sophisticated environmental sensing, an autonomous company can't function without a deep, real-time understanding of customer needs, behaviors, and changing requirements.


Indian factories and automation: The ‘everything bagel’ is here

True competitiveness in manufacturing now hinges on integrating automation right from the design stage and not just on the assembly floor, indicates Krishnamoorthy. “By connecting CAD environments with robot-friendly jigs, manufacturers can reduce programming times by 30 per cent, speeding up product launches and boosting agility in responding to market demands.” You can now walk around a plant inside your computer, thanks to the power of modelling technology. ... As attractive and revolutionary as this advent of automation is, some gaps remain to be addressed: labor replacement, robot taxes, turbulence in brownfield facilities and accidents caused by automation changing so much in the factories. Dai avers that automation may displace low-skill jobs but will address labor shortages. As for robot taxes, they will become the norm in the long term amid the rise of robotics, to balance innovation and social disruption. “Robotics governance is becoming increasingly critical to ensure security, privacy, ethics, and regulatory compliance,” he feels. ... “The future of robotics in manufacturing is about more than efficiency gains—it is about reshaping industrial culture, building resilience, and redefining global competitiveness. India, with its rapid adoption and supportive ecosystem, is not just catching up but positioning itself as a potential leader in this next era of intelligent manufacturing,” concludes Krishnamoorthy.


Old-school engineering lessons for AI app developers

Models keep getting smarter; apps keep breaking in the same places. The gap between demo and durable product remains the place where most engineering happens. How are development teams breaking the impasse? By getting back to basics. ... “When data agents fail, they often fail silently—giving confident-sounding answers that are wrong, and it can be hard to figure out what caused the failure.” He emphasizes systematic evaluation and observability for each step an agent takes, not just end-to-end accuracy. ... The teams that win treat knowledge as a product. They build structured corpora, sometimes using agents to lift entities and relations into a lightweight graph. They grade their RAG systems like a search engine: on freshness, coverage, and hit rate against a golden set of questions. ... As Valdarrama quips, “Letting AI write all of my code is like paying a sommelier to drink all of my wine.” In other words, use the machine to accelerate code you’d be willing to own; don’t outsource judgment. In practice, this means developers must tighten the loop between AI-suggested diffs and their CI and enforce tests on any AI-generated changes, blocking merges on red builds ... And then there’s security, which in the age of generative AI has taken on a surreal new dimension. The same guardrails we put on AI-generated code must be applied to user input, because every prompt should be treated as potentially hostile.

Daily Tech Digest - April 15, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy



Critical Thinking In The Age Of AI-Generated Code

Beyond understanding our own code, reviewing AI-generated code is an invaluable skill nowadays. Tools like GitHub's Copilot and DeepCode can code-review better than a junior software developer. Depending on the complexity of the codebase, they can save us time in code reviewing and pinpoint cases that we may have missed, but, after all, they are not flawless. We still need to verify that the AI assistant's code review did not provide any false positives or false negatives. We need to verify that the code review did not miss anything important and that the AI assistant got the context correctly. The hybrid approach seems to be the most effective one: let AI handle the grunt work and rely on developers for the critical analysis. ... After all, code reviewing AI-generated code is an excellent opportunity to educate ourselves while improving our code-reviewing skills. Keep in mind that, to date, AI-generated code optimizes for patterns in its training data. This may not be aligned with coding first principles. AI-generated code may follow templated solutions rather than custom designs. It may include unnecessary defensive code or overly generic implementations. We need to check that it has chosen the most appropriate solution for each code block generated. Another common problem is that LLMs may hallucinate.


DeepCoder: Revolutionizing Software Development with Open-Source AI

One of the DeepCoder project’s most significant contributions is the introduction of verl-pipeline, an optimized extension of the verl open-source RLHF library. The team identified sampling (the generation of long token sequences) as the primary bottleneck in training and developed “one-off pipelining” to address this challenge. This technique overlaps sampling, reward calculation, and training, reducing end-to-end training times by up to 2.5x. This optimization is game-changing for coding tasks requiring thousands of unit tests per reinforcement learning iteration, making previously prohibitive training runs accessible to smaller research teams and independent developers. For DevOps professionals, DeepCoder represents an opportunity to integrate advanced code generation directly into CI/CD pipelines without dependency on API-gated services. Teams can fine-tune the model on their codebase, creating customized assistants that understand their specific architecture and coding patterns. ... DeepCoder’s open-source nature aligns with the DevOps collaboration and shared improvement philosophy. As more organizations adopt and contribute to the model, we can expect to see specialized versions emerge for different programming languages and problem domains.
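The idea behind "one-off pipelining" can be sketched with a prefetch loop: while the trainer consumes the current rollout, the next rollout is already being sampled on another thread. This mimics the overlap described for verl-pipeline; it is not verl's actual implementation, and the function names and timings are invented.

```python
# Illustrative sketch of overlapping sampling with reward + training.
from concurrent.futures import ThreadPoolExecutor
import time

def sample(step):
    """Stand-in for long-sequence generation, the stated bottleneck."""
    time.sleep(0.05)
    return f"rollout-{step}"

def reward_and_train(rollout):
    """Stand-in for reward calculation plus a policy update."""
    time.sleep(0.02)
    return f"trained-on-{rollout}"

def pipelined(steps):
    log = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(sample, 0)              # prefetch the first rollout
        for step in range(steps):
            rollout = future.result()
            if step + 1 < steps:
                future = pool.submit(sample, step + 1)  # sample next batch...
            log.append(reward_and_train(rollout))       # ...while training on this one
    return log

print(pipelined(3))  # ['trained-on-rollout-0', 'trained-on-rollout-1', 'trained-on-rollout-2']
```

With sampling and training overlapped, the wall-clock cost per step approaches max(sample, train) instead of their sum, which is where the reported speedups come from.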


Transforming Software Development

AI assistants are getting smarter, moving beyond prompt-based interactions to anticipate developers’ needs and proactively offer suggestions. This evolution is driven by the rise of AI agents, which can independently execute tasks, learn from their experiences and even collaborate with other agents. Next year, these agents will serve as a central hub for code assistance, streamlining the entire software development lifecycle. AI agents will autonomously write unit tests, refactor code for efficiency and even suggest architectural improvements. Developers’ roles will need to evolve alongside these advancements. AI will not replace them. Far from it; proactive AI assistants and their underlying agents will help developers build new skills and free up their time to focus on higher-value, more strategic tasks. ... AI models are more powerful when trained on internal company data, which allows them to generate insights specific to an organization’s unique operations and objectives. However, this often requires running models on premises for security and compliance reasons. With open source models rapidly closing the performance gap with commercial offerings, more businesses will deploy models on premises in 2025. This will allow organizations to fine-tune models with their own data and deploy AI applications at a fraction of the cost.


Cybercriminal groups embrace corporate structures to scale, sustain operations

We have seen cross-collaboration between groups that specialize in specific activities. For example, one group specializes in social engineering, while another focuses on scaling malware and botnets to uncover open servers that yield database breaches. They, in turn, can sell access to those who focus on ransomware attacks. Recently, we have seen collaboration between AI/ML developers who scrape public records to build org charts, as well as lists of real estate holdings. This data is then used en masse with situational and location data to populate PDF attachments in emails that look like real invoices, with executives’ names in fake prior email responses, as part of the thread. ... the recent development in hackers organizing into larger groups has allowed the stakes to get even higher. Look at the Lazarus Group, who pulled off one of the largest heists ever by targeting Bybit and stealing $1.5 billion in Ethereum, as well as subsequently converting $300 million in unrecoverable funds. This group is likely state-sponsored and funding North Korean military programs. Therefore, understanding North Korean national interests will hint at future targets. The increasing scale of their attacks likely reflects greater resources allocated by North Korea, more sophisticated tooling and capabilities, lessons learned from previous operations, and a growing number of personnel trained in cyber operations.


Agentic AI might soon get into cryptocurrency trading — what could possibly go wrong?

Not everyone is bullish on the intersection of Web3, agentic AI and blockchain. Forrester Research vice president and principal analyst Martha Bennett is among those who are skeptical. In 2023, she co-authored an online post critical of Worldcoin, now the World project, and her opinion hasn’t changed in several regards. The World project still faces major challenges, including privacy issues and concerns about its iris biometric technology, she said. And agentic AI is still in its early stages and not yet capable of supporting Web3 transactions. Most current generative AI (genAI) tools, including LLMs, lack the autonomy defined as “agentic AI.” “There’s no AI technology today that would be able to automate Web3 transactions in a reliable and secure manner,” she said. Given the risks and the potential for exploitation, it’s too soon to rely on AI systems with high autonomy for Web3 transactions. She did note, however, that Web3 already uses automation through smart contracts — self-executing electronic contracts with the terms of the agreement directly written into code. “Will Web3 go mainstream in 2025? My overall answer is no, but there are nuances,” she said. “If mainstream means mass consumer adoption, it’s a definite no. There’s simply not enough utility there for consumers.” Web3, Bennett said, is largely a self-contained financial ecosystem, and efforts to boost adoption through Decentralized Physical Infrastructure Networks (DePIN), such as Tools for Humanity’s, haven’t led to major breakthroughs.


Artificial Intelligence fuels rise of hard-to-detect bots 

“The surge in AI-driven bot creation has serious implications for businesses worldwide,” said Tim Chang, General Manager of Application Security at Thales. “As automated traffic accounts for more than half of all web activity, organisations face heightened risks from bad bots, which are becoming more prolific every day.” ... “This year’s report sheds light on the evolving tactics and techniques utilised by bot attackers. What were once deemed advanced evasion methods have now become standard practice for many malicious bots,” Chang said. “In this rapidly changing environment, businesses must evolve their strategies. It’s crucial to adopt an adaptive and proactive approach, leveraging sophisticated bot detection tools and comprehensive cybersecurity management solutions to build a resilient defense against the ever-shifting landscape of bot-related threats.” ... Analysis in the report reveals a deliberate strategy by cyber attackers to exploit API endpoints that manage sensitive and high-value data. Implications of this trend are especially impactful for industries that rely on APIs for their critical operations and transactions. Financial services, healthcare, and e-commerce sectors are bearing the brunt of these sophisticated bot attacks, making them prime targets for malicious actors seeking to breach sensitive information.


Humans at the helm of an AI-driven grid

A growing number of utilities are turning to AI-based tools to process vast data streams and streamline tasks once managed by manual calculation. For instance, algorithms can analyse weather patterns, historical consumption, and real-time sensor readings to make more accurate power demand and renewable energy generation forecasts. This supports more efficient balancing of supply and demand, reducing the likelihood of overloaded transformers or unexpected brownouts. Some utilities are also exploring AI-driven alarm management, which can filter the flood of alerts triggered by a network issue. Instead of operators sifting through hundreds of notifications, AI tools can be used to identify and highlight the most critical issues in real time. Another AI application is congestion management: detecting trouble spots on the grid where demand might exceed capacity and even proposing rerouting strategies to keep electricity flowing reliably. While still in their early stages, AI tools hold promise for driving operational efficiency in many daily scenarios. ... Even the smartest algorithm, however, lacks the broader perspective and accountability that people bring to grid management. Power and utility companies are tasked with a public service mandate: they must ensure safety, affordability, and equitable access to electricity.
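The forecasting idea above can be reduced to a deliberately small sketch: blend historical consumption for a given hour of day with a temperature adjustment. Real utility models are far richer, and the coefficients and loads below are invented for illustration.

```python
# Toy demand forecast: historical mean for an hour, plus a sensitivity
# term for how far the temperature sits from a comfort baseline.

def forecast_demand(hour, temp_c, history_by_hour, temp_coeff=1.5, base_temp=18.0):
    """Predicted load (MW) = mean past load at this hour
    + temp_coeff MW per degree away from base_temp (heating or cooling)."""
    loads = history_by_hour[hour]
    baseline = sum(loads) / len(loads)
    return baseline + temp_coeff * abs(temp_c - base_temp)

history = {18: [620.0, 640.0, 660.0]}      # past loads at 6 pm, in MW
print(forecast_demand(18, 30.0, history))  # 640.0 + 1.5 * 12 = 658.0
```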


CISO Conversations: Maarten Van Horenbeeck, SVP & CSO at Adobe

The digital divide is simple to understand but complex to solve. Fundamentally, it separates those who have access to cyber and cyber knowledge from those who do not. There are areas of the world and socio-economic groups or demographics who have little or very limited access to the internet, and consequently very little awareness of cybersecurity. But cyber and cyber threats are worldwide; and technology is increasingly integrated and interconnected globally. “Cyber issues emanating from the digital divide don’t just play out far away from our homes – they play out very close to our homes as well,” warns Van Horenbeeck. “There’s a huge divide between people who know, for example, not to reuse passwords, to use multi factor authentication, and those individuals that have none of that experience at all.” In effect, the digital divide creates a largely invisible and unseen threat surface for the long-connected world. He believes that technology companies can play a part in solving this problem by making cybersecurity features easy to understand and use, and he cites two examples of the Adobe approach. “We invested, for example, in support for passkeys because we feel it’s a more effective and easier method of authentication that is also more secure.”


How AI, Robotics and Automation Transform Supply Chains

Enterprises designing robots to augment the human workforce need to take design thinking and ergonomic approaches into consideration. Designers must think about how robots comprehend and understand their physical surroundings without tripping over cables or objects on the floor, obstructing movement or causing human injuries. These robots are created with the aim to collaborate with humans for repetitive tasks and lift heavy loads. Last year, OT.today featured stories on how humanoid robots augmented the human workforce at Amazon, Mercedes, NASA and the Piaggio Group. In 2017, Alibaba invested in AI labs and the DAMO Academy. At its flagship Computing Conference in 2018, held in Hangzhou, China, Alibaba showcased a range of robots designed for warehouses, autonomous deliveries and other sectors, including hospitality and pharmaceuticals. More recently, Alibaba invested in LimX Dynamics, a company specializing in humanoid and robotic technology. Japanese automobile manufacturers have been using industrial robots since the early 1980s. Chip manufacturing companies in Taiwan and other countries also use them. Robots assist in surgeries in the healthcare sector. But none of those early manufacturing robots resembled humanoids or even had advanced AI seen in today's robots.


CIOs are overspending on the cloud — but still think it’s worth it

CIOs should also embrace DevOps practices tied to cost reduction when consuming cloud resources, Sellers says. One pitfall that doesn’t get enough attention: Many organizations don’t educate developers on the cost of cloud services, despite the glut of developer services large cloud providers make trivial to call. “I’ve lost track of how many services Amazon provides that developers can just use, and some of those can be quite expensive, but a developer doesn’t really know that,” Sellers says. “They’re like, ‘Instead of writing my own solution to this, I can just call this service that Amazon already provides, and boom, my job is done.’” The disconnect between developers and financial factors in the cloud is a real problem that leads to increased cloud costs, adds Nick Durkin, field CTO at Harness, provider of an AI-driven software development platform. Without knowing the costs of accessing a cloud-based GPU or CPU, for example, a developer is like a home builder who doesn’t know the cost of wood or brick, Durkin says. “If you’re not giving your smartest engineers access to the information about services that they can optimize on, how would you expect them to do it?” he says. “Then, finance comes back a month later with a beating stick.”

Daily Tech Digest - October 15, 2024

The NHI management challenge: When employees leave

Non-human identities (NHIs) support machine-to-machine authentication and access across software infrastructure and applications. These digital constructs enable automated processes, services, and applications to authenticate and perform tasks securely, without direct human intervention. Access is granted to NHIs through various types of authentications, including secrets such as access keys, certificates and tokens. ... When an employee exits, secrets can go with them. Those secrets – credentials, NHIs and associated workflows – can be exfiltrated from mental memory, recorded manually, stored in vaults and keychains, on removable media, and more. Secrets that have been exfiltrated are considered “leaked.” ... An equally great risk is that employees, especially developers, create, deploy and manage secrets as part of software stacks and configurations, as one-time events or in regular workflows. When they exit, those secrets can become orphans, whose very existence is unknown to colleagues or to tools and frameworks. ... The lifecycle of NHIs can stretch beyond the boundaries of a single organization, encompassing partners, suppliers, customers and other third parties. 
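An offboarding check for the orphaned-secrets risk described above can be sketched as an inventory sweep: flag any NHI credential whose registered owners are all gone, so it can be rotated or re-assigned instead of lingering unknown. The inventory shape and names below are invented for illustration.

```python
# Hedged sketch: find secrets whose every owner has left the organization.

def find_orphaned_secrets(inventory, active_employees):
    """Return IDs of secrets with no remaining active owner."""
    return [
        secret["id"] for secret in inventory
        if not any(owner in active_employees for owner in secret["owners"])
    ]

inventory = [
    {"id": "ci-deploy-key", "owners": ["alice"]},
    {"id": "billing-api-token", "owners": ["bob", "carol"]},
]
active = {"carol", "dave"}  # alice and bob have exited

print(find_orphaned_secrets(inventory, active))  # ['ci-deploy-key']
```

Running a sweep like this at every exit, rather than annually, is what keeps orphaned credentials from accumulating silently.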


How Ernst & Young’s AI platform is ‘radically’ reshaping operations

We’re seeing a new wave of AI roles emerging, with a strong focus on governance, ethics, and strategic alignment. Chief AI Officers, AI governance leads, knowledge engineers and AI agent developers are becoming critical to ensuring that AI systems are trustworthy, transparent, and aligned with both business goals and human needs. Additionally, roles like AI ethicists and compliance experts are on the rise, especially as governments begin to regulate AI more strictly. These roles go beyond technical skills — they require a deep understanding of policy, ethics, and organizational strategy. As AI adoption grows, so too will the need for individuals who can bridge the gap between the technology and the focus on human-centered outcomes.” ... Keeping humans at the center, especially as we approach AGI, is not just a guiding principle — it’s an absolute necessity. The EU AI Act is the most developed effort yet in establishing the guardrails to control the potential impacts of this technology at scale. At EY, we are rapidly adapting our corporate policies and ethical frameworks in order to, first, be compliant, but also to lead the way in showing the path of responsible AI to our clients.


The Truth Behind the Star Health Breach: A Story of Cybercrime, Disinformation, and Trust

The email that xenZen used as “evidence” was forged. The hacker altered the HTML code of an email using the common “inspect element” function—an easy trick to manipulate how a webpage appears. This allowed him to make it seem as though the email came directly from the CISO’s official account. ... XenZen’s attack demonstrates how cybercriminals are evolving. They are using psychological warfare to create chaos. In this case, xenZen not only exploited a vulnerability but also fabricated evidence to frame the CISO. The security community needs to stay vigilant and anticipate attacks that may target not just systems but also individuals and organizations through disinformation. ... Making the CISO a scapegoat for security breaches without proper evidence is a growing concern. Organizations must understand the complexities of cybersecurity and avoid jumping to conclusions. Security teams should have the support they need, including legal protection and clear communication channels. Transparency is essential, but so is the careful handling of internal investigations before pointing fingers.
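The forged-email lesson generalizes: rendered HTML proves nothing, because anyone can edit it in a browser, while server-recorded headers such as Authentication-Results (per RFC 8601) capture whether DKIM actually verified. A minimal sketch of that check follows; the headers and regex are illustrative, not a production parser.

```python
# Check the receiving server's recorded DKIM verdict instead of trusting
# what the message looks like on screen.
import re

def passes_dkim(raw_headers):
    """Return True only if an Authentication-Results header records dkim=pass."""
    m = re.search(r"Authentication-Results:.*?dkim=(\w+)", raw_headers, re.S)
    return bool(m) and m.group(1).lower() == "pass"

forged = "Subject: urgent\nFrom: ciso@example.com\n"   # no auth results at all
legit = (
    "Authentication-Results: mx.example.net;\n"
    " dkim=pass header.d=example.com\n"
)

print(passes_dkim(forged), passes_dkim(legit))  # False True
```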


How CIOs and CTOs Are Bridging Cross-Functional Collaboration

Ashwin Ballal, CIO at software company Freshworks, believes that the organizations that fail to collaborate well across departments are leaving money on the table. “Siloed communications create inefficiencies, leading to duplicative work, poor performance, and a negative employee experience. In my experience as a CIO, prioritizing cross-departmental communication has been essential to overcoming these challenges,” says Ballal. His team continually reevaluates the tech stack, collaborating with leaders and users to confirm that the organization is only investing in software that adds value. This approach saves money and helps keep employees engaged by minimizing their interactions with outdated technology. He also uses employees as product beta testers, and their feedback impacts the product roadmap. ... “My recommendation for other CIOs and CTOs is to regularly meet with departmental leaders to understand how technology interacts across the organization. Sending out regular surveys can yield candid feedback on what’s working and what isn’t. Additionally, fostering an environment where employees can experiment with new technologies encourages innovation and problem-solving.”


2025 Is the Year of AI PCs; Are Businesses Onboard?

With the rise of real-time computing needs and the proliferation of IoT devices, businesses are realizing the need to move AI closer to where the data is - at the edge. This is where AI PCs come into play. Unlike their traditional counterparts, AI PCs are integrated with neural processing units, NPUs, that enable them to handle AI workloads locally, reducing latency and providing a more secure computing environment. "The anticipated surge in AI PCs is largely due to the supply-side push, as NPUs will be included in more CPU vendor road maps," said Ranjit Atwal, senior research director analyst at Gartner. NPUs allow enterprises to move from reactive to proactive IT strategies. Companies can use AI PCs to predict IT infrastructure failures before they happen, minimizing downtime and saving millions in operational costs. NPU-integrated PCs also allow enterprises to process AI-related tasks, such as machine learning, natural language processing and real-time analytics, directly on the device without relying on cloud-based services. And with generative AI becoming part of enterprise technology stacks, companies investing in AI PCs are essentially future-proofing their operations, preparing for a time when gen AI capabilities become a standard part of business tools.


Australia’s Cyber Security Strategy in Action – Three New Draft Laws Published

Australia is following in the footsteps of other jurisdictions such as the United States by establishing a Cyber Review Board. The Board’s remit will be to conduct no-fault, post-incident reviews of significant cyber security incidents in Australia. The intent is to strengthen cyber resilience, by providing recommendations to Government and industry based on lessons learned from previous incidents. Limited information gathering powers will be granted to the Board, so it will largely rely on cooperation by impacted businesses. ... Mandatory security standards for smart devices - The Cyber Security Bill also establishes a framework under which mandatory security standards for smart devices will be issued. Suppliers of smart devices will be prevented from supplying devices which do not meet these security standards, and will be required to provide statements of compliance for devices manufactured in Australia or supplied to the Australian market. The Secretary of Home Affairs will be given the power to issue enforcement notices (including compliance, stop and recall notices) if a certificate of compliance for a specific device cannot be verified.


The Role of Zero Trust Network Access Tools in Ransomware Recovery

By integrating with existing identity providers, Zero Trust Network Access ensures that only authenticated and authorized users can access specific applications. This identity-driven approach, combined with device posture assessments and real-time threat intelligence, provides a robust defense against unauthorized access during a ransomware recovery. Moreover, ZTNA’s application-layer security means that even if a user’s credentials are compromised, the attacker would only gain access to specific applications rather than the entire network. This granular access control is crucial in containing ransomware attacks and preventing lateral movement across the network. ... As a cloud-native solution, ZTNA can easily scale to meet the demands of organizations of all sizes, from small businesses to large enterprises. This scalability is particularly valuable during a ransomware recovery, where the need for secure access may fluctuate based on the number of systems and users involved. ZTNA’s flexibility also allows it to integrate with various IT environments, including hybrid and multi-cloud infrastructures. This adaptability ensures that organizations can deploy ZTNA without the need for significant changes to their existing setups, making it an ideal solution for dynamic environments.
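The per-application decision at the heart of ZTNA can be modeled in a few lines: identity, device posture, and the requested app are evaluated together, so a stolen credential on a non-compliant device still gets nothing, and a valid user never gets "the network", only named apps. The policy shape below is a toy, not any vendor's API.

```python
# Toy ZTNA decision: grant access to one application at a time, never broad
# network access, and only when identity AND device posture both check out.

def authorize(user, device_compliant, app, policy):
    """Return True only for a compliant device and an explicitly granted app."""
    allowed_apps = policy.get(user, set())
    return device_compliant and app in allowed_apps

policy = {"jsmith": {"payroll", "wiki"}}

print(authorize("jsmith", True, "payroll", policy))    # True
print(authorize("jsmith", False, "payroll", policy))   # False: bad posture
print(authorize("jsmith", True, "finance-db", policy)) # False: app not granted
```

The third call is the lateral-movement point from the passage: compromised credentials alone never widen the blast radius beyond the apps the identity was granted.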


What Is Server Consolidation and How Can It Improve Data Center Efficiency?

Server consolidation is the process of migrating workloads from multiple underutilized servers into a smaller collection of servers. ... although server consolidation typically focuses on consolidating physical servers, it can also apply to virtual servers. For instance, if you have five virtual hosts running on the same physical server, you might consolidate them into just three virtual hosts. Doing so would reduce the resources wasted on hypervisor overhead, allowing you to maximize the return on investment from your server hardware. ... To determine whether server consolidation will reduce energy usage, you’ll have to calculate the energy needs of your servers. Typically, power supplies indicate how many watts of electricity they supply to servers. Using this number, you can compare how energy requirements vary between machines. Keep in mind, however, that actual energy consumption will vary depending on factors like CPU clock speed and how active server CPUs are. So, in addition to comparing the wattage ratings on power supplies, you should track how much electricity your servers actually consume, and how that metric changes before and after you consolidate servers.
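The energy comparison described above is simple arithmetic: convert each server's average draw to kilowatt-hours over a month and compare the totals before and after consolidation. The wattage figures below are invented for illustration; in practice you would use measured draw, not nameplate ratings, for the reason the passage gives.

```python
# Back-of-the-envelope consolidation savings in kWh per month.

def monthly_kwh(avg_watts, hours=730):
    """kWh for one server over roughly a month of continuous operation."""
    return avg_watts * hours / 1000.0

before = [monthly_kwh(w) for w in (180, 170, 160)]  # three lightly used hosts
after = [monthly_kwh(310)]                          # one consolidated host

savings = sum(before) - sum(after)
print(round(savings, 1))  # (510 W - 310 W) * 730 h / 1000 = 146.0 kWh/month
```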


How DDoS Botnets Are Used to Infect Your Network

The threat posed by DDoS botnets remains significant and complex. As these malicious networks grow more sophisticated, understanding their mechanisms and potential impacts is crucial for organizations. DDoS botnets not only facilitate financial theft and data breaches but also enable large-scale spam and phishing campaigns that can undermine trust and security. To effectively defend against these threats, organizations must prioritize proactive measures, including regular updates, robust security protocols, and vigilant monitoring of network activity. By implementing strategies to identify and mitigate botnet attacks, businesses can safeguard their systems and data from potential harm. Ultimately, a comprehensive understanding of how DDoS botnets operate—and the strategies to combat them—will empower organizations to navigate the challenges of cybersecurity and maintain a secure digital environment. As a CERT-In empanelled organization, Kratikal is equipped to enhance your understanding of potential risks. Our manual and automated Vulnerability Assessment and Penetration Testing (VAPT) services proficiently discover, detect, and assess vulnerabilities within your IT infrastructure. 


Banks Must Try the Flip Side of Embedded Finance: Embedded Fintech

With a one-way-street perspective on embedded finance, the idea is that if payment volume is moving to tech companies then banks should power the back end of the tech experience. This is a good start but the threat from fintech companies to retail banks will only continue to deepen in the future. Customer adoption is higher than ever for some fintechs like Chime and Nubank, for example. A better approach would be for banks to use embedded fintech to improve customer experience by upgrading banks’ tech offerings to retain customers and grow within their customer base. Embedded fintech can help these organizations stay competitive technologically. ... There are many opportunities for innovation with embedded payroll. Banks are uniquely positioned to offer tailored payroll solutions that map to what small businesses today want. Payroll is complex and needs to be compliant to avoid hefty penalties. Embedded payroll lets banks offload costs, burdens and risks associated with payroll. Banks can offer faster payroll with less risk when they hold the accounts for employers and payees. They can also give business customers a fuller picture of their cash flow, offering them peace of mind. 



Quote for the day:

"Pull the string and it will follow wherever you wish. Push it and it will go nowhere at all." -- Dwight D. Eisenhower

Daily Tech Digest - March 10, 2024

What’s the privacy tax on innovation?

A few decades ago, California had one of the strongest definitions for certifying Organic foods in the US. Eventually, the US government stepped in with a watered-down definition. Despite the pain of new privacy controls, the US data broker industry will lobby for a similar approach to at least harmonize privacy regulations at the Federal level that limit the impact on their business models when operating across state lines. For businesses and consumers, a more equitable approach would be to add a few more teeth to the cost of data misuse arising from legal sales, employee theft, or breaches. A few high-profile payouts arising from theft or when this data is used as part of multi-million dollar ransomware attacks on critical business systems would have a focusing effect on better privacy management practices. Another option is to turn to banks as holders of trust. Banks may be a good first point for managing the financial data we directly share with them. But what about all the data that others gather that may not be tied to traditional identifiers like social security numbers (SSN) used to unify data, such as IP addresses, phone numbers, Wi-Fi hubs, or the trail of GPS dots that gravitate to your home or office?


Living with the ghost of a smart home’s past

There were the window shades that always opened at 8AM and always closed at sundown. My brother disconnected everything that looked like a hub, and still, operating on some inaccessible internal clock, the shades carried on as they were once programmed to do. ... This is the state of home ownership in 2024! People have been making their homes smart with off-the-shelf parts for well over a decade now. Sometimes they sell those homes, and the new homeowners find themselves mired in troubleshooting when they should be trying to pick out wall colors. Some former homeowners will provide onboarding to the home’s smart home system, but most do as the guy who used to own my brother’s house did. They walk away and leave it as an adventure for the next person. ... I really hope the new renters of my old Brooklyn walk-up appreciate all the 2014 Philips Hue lights I left installed in the basement. There’s a calculus you make as you’re moving. It’s a hectic time, and there’s a lot to be done. Do you want to spend half the day freeing all those Hue bulbs from their obnoxious and broken recessed light housings, or do you want to leave a potential gift for the next homeowner and get started on nesting in your new place? 


Overcoming the AI Privacy Predicament

According to one study by Brookings, while 57% of consumers felt that AI will have a net negative impact on privacy, 34% were unsure about how AI would affect their privacy. Indeed, AI evokes a mixed set of thoughts and emotions in consumers. For most people, the promise of AI is clear: from increasing efficiency, to automating mundane tasks and freeing up more time for creative work, to improving outcomes in areas such as healthcare and education. ... In the realm of AI, the lack of trust is significant. Indeed, 81% of consumers think the information collected by AI companies will be used in ways people are uncomfortable with, as well as in ways that were not originally intended. That consumers are put in a seemingly impossible predicament regarding their privacy leaves them little choice but to a.) consent, or b.) forgo use of the product or service. Both choices leave consumers wanting more from the digital economy. When a new technology has negative implications for privacy, consumers have shown they are willing to engage in privacy-protective behaviors, such as deleting an app, withholding personal information, or abandoning an online purchase altogether.


How Static Analysis Can Save Your Software

While static analysis is a means of pattern detection, fixing an actual bug (for example, dereferencing a null pointer) is much harder, albeit possible. It becomes mathematically difficult to track exponentially increasing possible states. We call this “path explosion.” Say you’re writing code that, given two integers, divides one by the other, and there are various failure modes depending on the integers’ values. But what if the denominator is zero? That results in undefined behavior, and it means you need to look at where those integers came from, their possible values and what branches they took along the way. If you can see that the denominator is checked against zero before the division — and branches away if it is — you should be safe from division-by-zero issues. This theoretical stepping through stages of code is called “symbolic execution.” It’s not too complicated if the checkpoint is fairly close to the division process, but the further away it gets, the more branches you must account for. Crossing the function boundary gets even trickier. But once you have calls from other translation units, the problem becomes intractable in the general case. 
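The division-by-zero scenario in the passage can be made concrete with a small sketch (the function name and error message are illustrative). The point is the placement of the check: here it sits right next to the division, so a symbolic-execution pass has only one short branch to track to prove the operation safe; push that check across function or translation-unit boundaries and the number of paths to follow multiplies, which is the path explosion the article describes.

```python
# Guarded integer division: the kind of branch a static analyzer must
# discover to rule out undefined behavior at the division site.

def safe_div(numerator: int, denominator: int) -> int:
    if denominator == 0:  # the branch that makes the division provably safe
        raise ValueError("denominator must be nonzero")
    return numerator // denominator

print(safe_div(10, 2))  # 5
```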


Avoiding Shift Left Exhaustion – Part 1

Shift left requires developers to be involved in testing, quality assurance, and collaboration throughout the development cycle. While this is undoubtedly beneficial for the final product, it can lead to an increased workload for developers who must balance their coding responsibilities with testing and problem-solving tasks. ... Adapting to Shift left practices often requires developers to acquire new skills and stay current with the latest testing methodologies and tools. This continuous learning can be intellectually stimulating and exhausting, especially in an industry that evolves rapidly. Developers must understand new tools, processes, and technologies as more things get moved earlier in the development lifecycle. ... The added pressure of early and continuous testing and the demand for faster development cycles can lead to developer burnout. When developers are overburdened, their creativity and productivity may suffer, ultimately impacting the software quality they produce. ... Shifting testing and quality assurance left in the development process may impose strict time constraints. Developers may feel pressured to meet tight deadlines, which can be stressful and lead to rushed decision-making, potentially compromising the software’s quality.


Ransomware Attacks on Critical Infrastructure Are Surging

Especially under fire are critical services. Healthcare and public health agencies dominated, filing 249 reports to IC3 last year over ransomware attacks, followed by 218 reports from critical manufacturing and 156 from government facilities. Ransomware-wielding attackers are potentially targeting these sectors most because they perceive the victims as having a proclivity to pay, given the risk to life or essential business processes posed by their systems being disrupted. Last year, IC3 received a ransomware report from at least one victim in all of the 16 critical infrastructure sectors - which include financial services, food and agriculture, energy and communications - except for two: dams and nuclear reactors, materials and waste. The ransomware group tied to the largest number of successful attacks against critical infrastructure reported to IC3 last year was LockBit, followed by Alphv/BlackCat, Akira, Royal and Black Basta. Law enforcement recently disrupted Alphv/BlackCat, as well as LockBit, after which each group separately claimed to have rebooted before appearing to go dark. 


What’s the missing piece for mainstream Web3 adoption?

Today’s Web3 lacks a unifying ecosystem, causing the market to fracture into multiple, independently evolving use cases. Crypto enthusiasts have to use various decentralized applications (DApps) and platforms to perform transactions and interact with the different sectors of Web3. This isn’t a sustainable growth model for the Web3 industry, however, and is more of a deterrent than a benefit when it comes to crypto adoption. ... Recognizing the need for a more integrated approach, some Web3 players are moving beyond the hype. Legion Network is emerging as a notable example. As a one-stop shop for Web3, Legion Network addresses the complexity of the industry and reaches new audiences. It brings together essential Web3 use cases, including a proprietary crypto wallet with comprehensive portfolio tracking, DeFi swaps and bridges, engaging play-to-earn/win games, captivating quests with prize rewards, a launchpad for emerging projects and a unique SocialFi experience that fosters community engagement.


What’s Driving Changes in Open Source Licensing?

In response to the challenges posed by cloud computing, some vendor-driven open source projects have changed their licenses or their go-to-market (GTM) models. For example, MongoDB, Elastic, Confluent, Redis Labs and HashiCorp have adopted new licenses that restrict third parties from offering their software as a service, or that require them to pay fees or share their modifications. These changes are intended to protect the revenue and sustainability of the original vendors and to ensure that they can continue to invest in the open source project. However, these changes have also caused controversy and backlash from the user community, who may feel that the project is becoming less open and more proprietary, or that they are losing some of the benefits and freedoms of open source. In contrast, community-driven open source projects have largely maintained their permissive licenses and their collaborative approach. These projects still benefit from the diversity and scale of their user community, who contribute to the development, maintenance, support and security of the software. They also leverage the support of organizations and foundations, such as the Linux Foundation, the Apache Software Foundation and the CNCF, which provide governance, funding and infrastructure.


Botnets: The uninvited guests that just won’t leave

Reducing response time is vital. The longer the dwell time, the more likely it is that botnets can impact a business, particularly given that botnets can spread across many devices in a short period. How can security teams improve detection processes and shrink the time it takes to respond to malicious activity? Security practitioners should have multiple tools and strategies at their disposal to protect their organization’s networks against botnets. An obvious first step is to block access to all known command-and-control (C2) servers. Next, leverage application control to restrict unauthorized access to your systems. Additionally, use Domain Name System (DNS) filtering to target botnets explicitly, concentrating on each category or website that might expose your system to them. DNS filtering also helps to mitigate the domain generation algorithms (DGAs) that botnets often use. Monitoring data as it enters and leaves devices is vital as well, since you can spot botnets as they attempt to infiltrate your computers or those connected to them. This is what makes security information and event management (SIEM) technology, paired with detection of malicious indicators of compromise (IoCs), so critical to protecting against bots.
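The C2-blocklisting and DGA-mitigation steps above can be sketched in a few lines. The example below is a minimal, illustrative triage function, not a production DNS filter: the blocklist is hypothetical (real deployments would consume a threat-intelligence feed), and the entropy threshold and minimum label length are illustrative assumptions, since DGA-generated domains tend to have long, random-looking labels with high character entropy.

```python
import math
from collections import Counter

# Hypothetical blocklist of known C2 domains; in practice this would be
# populated from a threat-intelligence feed, not hard-coded.
KNOWN_C2_DOMAINS = {"evil-c2.example", "botnet-cc.example"}


def shannon_entropy(label: str) -> float:
    """Shannon entropy of a domain label, in bits per character."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())


def classify_domain(domain: str, entropy_threshold: float = 3.5) -> str:
    """Rough triage: known C2, DGA-like (long high-entropy label), or allowed."""
    domain = domain.lower().rstrip(".")
    if domain in KNOWN_C2_DOMAINS:
        return "block: known C2"
    # DGA domains often have a long, random-looking leftmost label.
    label = domain.split(".")[0]
    if len(label) >= 12 and shannon_entropy(label) > entropy_threshold:
        return "flag: possible DGA"
    return "allow"
```

In a real deployment this kind of check would sit behind a DNS resolver or forwarder, with "block" verdicts sinkholed and "flag" verdicts forwarded to the SIEM as indicators of compromise for correlation with other telemetry.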


Are You Ready to Protect Your Company From Insider Threats? Probably Not

The real problem is that employees and employers don’t trust each other. This is an enormous risk for employers, because such an environment makes it more likely that insider threats, security risks that originate from within the company, will emerge or intensify when tensions are high and motivations such as financial strain, dissatisfaction or desperation drive individuals to act against their own organization. That’s the bad news. The worst news is that most companies are unprepared to meet the moment. ... Insider threats often betray their motivation. Sometimes, they tell colleagues about their intentions. Other times, their actions speak louder than words: attempts to work around security protocols, active resentment toward coworkers or leadership, or general job dissatisfaction can be red flags that an insider threat is about to act. Explaining the impact of human intelligence, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) writes, “An organization’s own personnel are an invaluable resource to observe behaviors of concern, as are those who are close to an individual, such as family, friends, and coworkers.”



Quote for the day:

"Leaders must be close enough to relate to others, but far enough ahead to motivate them." -- John C. Maxwell