Showing posts with label ShiftLeft. Show all posts

Daily Tech Digest - February 11, 2025


Quote for the day:

"Your worth consists in what you are and not in what you have." -- Thomas Edison


Protecting Your Software Supply Chain: Assessing the Risks Before Deployment

Given the vast number of third-party components used in modern IT, it's unrealistic to scrutinize every software package equally. Instead, security teams should prioritize their efforts based on business impact and attack surface exposure. High-privilege applications that frequently communicate with external services should undergo product security testing (PST), while lower-risk applications can be assessed through automated or less resource-intensive methods. Whether done before deployment or as a retrospective analysis, a structured approach to PST ensures that organizations focus on securing the most critical assets first while maintaining overall system integrity. ... While PST will never prevent a breach of a third party outside your control, it is necessary to allow organizations to make informed decisions about their defensive posture and response strategy. Many organizations follow a standard process of identifying a need, selecting a product, and deploying it without a deep security evaluation. This lack of scrutiny can leave them scrambling to determine the impact when a supply chain attack occurs. By incorporating PST into the decision-making process, security teams gain critical documentation, including dependency mapping, threat models, and specific mitigations tailored to the technology in use.
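The prioritization described above can be sketched as a simple scoring model. Everything here (the field names, weights, and threshold) is a hypothetical illustration of ranking components by privilege, exposure, and business impact, not an established methodology:

```python
# Hypothetical risk-scoring sketch: rank third-party components so the
# highest-impact ones get full product security testing (PST) first.
# Weights and fields are illustrative assumptions, not a standard.

def risk_score(component: dict) -> int:
    """Score a component by privilege level and attack-surface exposure."""
    privilege = {"low": 1, "medium": 2, "high": 3}[component["privilege"]]
    exposure = 3 if component["talks_to_internet"] else 1
    impact = {"low": 1, "medium": 2, "high": 3}[component["business_impact"]]
    return privilege * exposure * impact

def triage(components: list[dict], threshold: int = 9) -> tuple[list, list]:
    """Split components into full-PST candidates and automated-scan candidates."""
    ranked = sorted(components, key=risk_score, reverse=True)
    full_pst = [c for c in ranked if risk_score(c) >= threshold]
    automated = [c for c in ranked if risk_score(c) < threshold]
    return full_pst, automated

inventory = [
    {"name": "payment-gateway", "privilege": "high",
     "talks_to_internet": True, "business_impact": "high"},
    {"name": "internal-wiki", "privilege": "low",
     "talks_to_internet": False, "business_impact": "low"},
]
deep, light = triage(inventory)
```

The point is not the particular arithmetic but the workflow: an explicit, repeatable rule decides which packages get expensive manual testing and which get lighter automated assessment.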


Google’s latest genAI shift is a reminder to IT leaders — never trust vendor policy

Entities out there doing things you don’t like are always going to be able to get generative AI (genAI) services and tools from somebody. You think large terrorist cells can’t use their money to pay somebody to craft LLMs for them? Even the most powerful enterprises can’t stop it from happening. But, that may not be the point. Walmart, ExxonMobil, Amazon, Chase, Hilton, Pfizer and Toyota and the rest of those heavy-hitters merely want to pick and choose where their monies are spent. Big enterprises can’t stop AI from being used to do things they don’t like, but they can make sure none of it is being funded with their money. If they add a clause to every RFP that they will only work with model-makers that agree to not do X, Y, or Z, that will get a lot of attention. The contract would have to be realistic, though. It might say, for instance, “If the model-maker later chooses to accept payments for the above-described prohibited acts, they must reimburse all of the dollars we have already paid and must also give us 18 months notice so that we can replace the vendor with a company that will respect the terms of our contracts.” From the perspective of Google, along with Microsoft, OpenAI, IBM, AWS and others, the idea is to take enterprise dollars on top of government contracts. 


Is Fine-Tuning or Prompt Engineering the Right Approach for AI?

It’s not just about having access to GPUs — it’s about getting the most out of proprietary data with new tools that make fine-tuning easier. Here’s why fine-tuning is gaining traction:

- Better results with proprietary data: Fine-tuning allows businesses to train models on their own data, making the AI much more accurate and relevant to their specific tasks. This leads to better outcomes and real business value.
- Easier than ever before: Tools like Hugging Face’s open source libraries, PyTorch and TensorFlow, along with cloud services, have made fine-tuning more accessible. These frameworks simplify the process, even for teams without deep AI expertise.
- Improved infrastructure: The rising availability of powerful GPUs and cloud-based solutions has made it much easier to set up and run fine-tuning at scale.

While fine-tuning opens the door to more customized AI, it does require careful planning and the right infrastructure to succeed. ... As enterprises accelerate their AI adoption, choosing between prompt engineering and fine-tuning will have a significant impact on their success. While prompt engineering provides a quick, cost-effective solution for general tasks, fine-tuning unlocks the full potential of AI, enabling superior performance on proprietary data.


Shifting left without slowing down

On the one hand, automation enabled by GenAI tools in software development is driving unprecedented developer productivity, further widening the gap created by manual application security controls, like security reviews or threat modeling. But in parallel, recent advancements in code understanding enabled by these technologies, together with programmatic policy-as-code controls, enable a giant leap in the value security automation can bring. ... The first step is recognizing security as a shared responsibility across the organization, not just a specialized function. Equipping teams with automated tools and clear processes helps integrate security into everyday workflows. Establishing measurable goals and metrics to track progress can also provide direction and accountability. Building cross-functional collaboration between security and development teams sets the foundation for long-term success. ... A common pitfall is treating security as an afterthought, leading to disruptions that strain teams and delay releases. Conversely, overburdening developers with security responsibilities without proper support can lead to frustration and neglect of critical tasks. Failure to adopt automation or align security goals with development objectives often results in inefficiency and poor outcomes.
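As a concrete illustration of the policy-as-code idea, here is a minimal sketch in Python. Real deployments would typically use a dedicated engine such as OPA/Rego; the rules and manifest fields below are invented for illustration:

```python
# Minimal policy-as-code sketch (illustrative, not a real OPA/Rego policy):
# security rules expressed as plain functions, evaluated automatically
# against a deployment description instead of in a manual review.

def no_root_user(manifest: dict) -> list[str]:
    """Containers must not run as UID 0."""
    return [] if manifest.get("run_as_user", 0) != 0 else ["container runs as root"]

def pinned_image(manifest: dict) -> list[str]:
    """Images must carry an explicit tag or digest."""
    image = manifest.get("image", "")
    pinned = "@sha256:" in image or ":" in image.split("/")[-1]
    return [] if pinned else ["image tag not pinned"]

POLICIES = [no_root_user, pinned_image]

def evaluate(manifest: dict) -> list[str]:
    """Collect violations from every policy; an empty list means the gate passes."""
    violations = []
    for policy in POLICIES:
        violations.extend(policy(manifest))
    return violations

good = {"image": "registry.example/app:1.4.2", "run_as_user": 1000}
bad = {"image": "registry.example/app", "run_as_user": 0}
```

Because the rules are code, they run automatically on every change, which is exactly the shift from manual review to security automation the paragraph describes.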


How To Approach API Security Amid Increasing Automated Attack Sophistication

We’ve now gone from ‘dumb’ attacks—for example, web-based attacks focused on extracting data from third parties or exploiting a single, specific vulnerability—to ‘smart’ AI-driven attacks that often involve picking an actual target, resulting in a more focused attack. Going after a particular organization, perhaps a large enterprise or even a nation-state, instead of hunting for vulnerable individuals is a significant shift. The sophistication is increasing as attackers manipulate request payloads to trick the backend system into an action. ... Another element of API security is being aware of sensitive data. Personally Identifiable Information (PII) is moving through APIs constantly and is vulnerable to theft or exfiltration. Organizations often pay little attention to vulnerabilities until the result is damage to the organization through leaked PII, stolen finances, or harmed brand reputation. ... The security teams know the network systems and the infrastructure well but don't understand application behaviors. The DevOps team tends to own the applications but doesn’t see anything in production. This split in ownership leaves a boundary in most organizations that is ripe for exploitation. Many data exfiltration cases fall into this no man’s land, since most incidents are executed by an authenticated user.


Top 5 ways attackers use generative AI to exploit your systems

Gen AI tools help criminals pull together different sources of data to enrich their campaigns — whether this is group social profiling, or targeted information gleaned from social media. “AI can be used to quickly learn what types of emails are being rejected or opened, and in turn modify its approach to increase phishing success rate,” Mindgard’s Garraghan explains. ... The traditionally difficult task of analyzing systems for vulnerabilities and developing exploits can be simplified through use of gen AI technologies. “Instead of a black hat hacker spending the time to probe and perform reconnaissance against a system perimeter, an AI agent can be tasked to do this automatically,” Mindgard’s Garraghan says. ... “This sharp decrease strongly indicates that a major technological advancement — likely GenAI — is enabling threat actors to exploit vulnerabilities at unprecedented speeds,” ReliaQuest writes. ... Check Point Research explains: “While ChatGPT has invested substantially in anti-abuse provisions over the last two years, these newer models appear to offer little resistance to misuse, thereby attracting a surge of interest from different levels of attackers, especially the low skilled ones — individuals who exploit existing scripts or tools without a deep understanding of the underlying technology.”


Why firewalls and VPNs give you a false sense of security

VPNs and firewalls play a crucial role in extending networks, but they also come with risks. By connecting more users, devices, locations, and clouds, they inadvertently expand the attack surface with public IP addresses. This expansion allows users to work remotely from anywhere with an internet connection, further stretching the network’s reach. Moreover, the rise of IoT devices has led to a surge in Wi-Fi access points within this extended network. Even seemingly innocuous devices like Wi-Fi-connected espresso machines, meant for a quick post-lunch pick-me-up, contribute to the proliferation of new attack vectors that cybercriminals can exploit. ... More doesn’t mean better when it comes to firewalls and VPNs. Expanding a perimeter-based security architecture rooted in firewalls and VPNs means more deployments, more overhead costs, and more time wasted for IT teams – but less security and less peace of mind. Backhauling traffic through VPNs also degrades user experience and satisfaction across the entire organization. Other challenges like the cost and complexity of patch management, security updates, software upgrades, and constantly refreshing aging equipment as an organization grows are enough to exhaust even the largest and most efficient IT teams.


Building Trust in AI: Security and Risks in Highly Regulated Industries

AI hallucinations have emerged as a critical problem, with systems generating plausible but incorrect information - for instance, AI fabricated software dependencies, such as PyTorture, leading to potential security risks. Hackers could exploit these hallucinations by creating malicious components masquerading as real ones. In another case, an AI libelously fabricated an embezzlement claim, resulting in legal action - marking the first time AI was sued for libel. Security remains a pressing concern, particularly with plugins and software supply chains. A ChatGPT plugin once exposed sensitive data due to a flaw in its OAuth mechanism, and incidents like PyTorch’s vulnerable release over Christmas demonstrate the risks of system exploitation. Supply chain vulnerabilities affect all technologies, while AI-specific threats like prompt injection allow attackers to manipulate outputs or access sensitive prompts, as seen in Google Gemini. ... Organizations can enhance their security strategies by utilizing frameworks like Google’s Secure AI Framework (SAIF). These frameworks highlight security principles, including access control, detection and response systems, defense mechanisms, and risk-aware processes tailored to meet specific business needs.


When LLMs become influencers

Our ability to influence LLMs is seriously circumscribed. Perhaps if you’re the owner of the LLM and associated tool, you can exert outsized influence on its output. For example, AWS should be able to train Amazon Q to answer questions, etc., related to AWS services. There’s an open question as to whether Q would be “biased” toward AWS services, but that’s almost a secondary concern. Maybe it steers a developer toward Amazon ElastiCache and away from Redis, simply by virtue of having more and better documentation and information to offer a developer. The primary concern is ensuring these tools have enough good training data so they don’t lead developers astray. ... Well, one option is simply to publish benchmarks. The LLM vendors will ultimately have to improve their output or developers will turn to other tools that consistently yield better results. If you’re an open source project, commercial vendor, or someone else that increasingly relies on LLMs as knowledge intermediaries, you should regularly publish results that showcase those LLMs that do well and those that don’t. Benchmarking can help move the industry forward. By extension, if you’re a developer who increasingly relies on coding assistants like GitHub Copilot or Amazon Q, be vocal about your experiences, both positive and negative. 


Deepfakes: How Deep Can They Go?

Metaphorically, spotting deepfakes is like playing the world’s most challenging game of “spot the difference.” The fakes have become so sophisticated that the inconsistencies are often nearly invisible, especially to the untrained eye. It requires constant vigilance and the ability to question the authenticity of audiovisual content, even when it looks or sounds completely convincing. Recognizing threats and taking decisive actions are crucial for mitigating the effects of an attack. Establishing well-defined policies, reporting channels, and response workflows in advance is imperative. Think of it like a citywide defense system responding to incoming missiles. Early warning radars (monitoring) are necessary to detect the threat; anti-missile batteries (AI scanning) are needed to neutralize it; and emergency services (incident response) are essential to quickly handle any impacts. Each layer works in concert to mitigate harm. ... If a deepfake attack succeeds, organizations should immediately notify stakeholders of the fake content, issue corrective statements, and coordinate efforts to remove the offending content. They should also investigate the source, implement additional verification measures, and provide updates to rebuild trust and consider legal action. 


Daily Tech Digest - November 06, 2024

Enter the ‘Whisperverse’: How AI voice agents will guide us through our days

Within the next few years, an AI-powered voice will burrow into your ears and take up residence inside your head. It will do this by whispering guidance to you throughout your day, reminding you to pick up your dry cleaning as you walk down the street, helping you find your parked car in a stadium lot and prompting you with the name of a coworker you pass in the hall. It may even coach you as you hold conversations with friends and coworkers, or when out on dates, give you interesting things to say that make you seem smarter, funnier and more charming than you really are. ... Most of these devices will be deployed as AI-powered glasses because that form factor gives the best vantage point for cameras to monitor our field of view, although camera-enabled earbuds will be available too. The other benefit of glasses is that they can be enhanced to display visual content, enabling the AI to provide silent assistance as text, images, and realistic immersive elements that are integrated spatially into our world. Also, sensor-equipped glasses and earbuds will allow us to respond silently to our AI assistants with simple head-nod gestures of agreement or rejection, as we naturally do with other people. ... On the other hand, deploying intelligent systems that whisper in your ears as you go about your life could easily be abused as a dangerous form of targeted influence.


How to Optimize Last-Mile Delivery in the Age of AI

Technology is at the heart of all advancements in last-mile delivery. For instance, a typical map application gives the longitude and latitude of a building — its location — and a central access point. That isn't enough data when it comes to deliveries. In addition to how much time it takes to drive or walk from point A to point B, it's also essential for a driver to understand what to do at point B. At an apartment complex, for example, they need to know what units are in each building and on which level, whether to use a front, back, or side entrance, how to navigate restricted or gated areas, and how to access parking and loading docks or package lockers. Before GenAI, third-party vendors usually acquired this data, sold it to companies, and applied it to map applications and routing algorithms to provide delivery estimates and instructions. Now, companies can use GenAI in-house to optimize routes and create solutions to delivery obstacles. Suppose the data surrounding an apartment complex is ambiguous or unclear. For instance, there may be conflicting delivery instructions — one transporter used a drop-off area, and another used a front door. Or perhaps one customer was satisfied with their delivery, but another parcel delivered to the same location was damaged or stolen. 


Cloud providers make bank with genAI while projects fail

Poor data quality is a central factor contributing to project failures. As companies venture into more complex AI applications, the demand for tailored, high-quality data sets has exposed deficiencies in existing enterprise data. Although most enterprises understood that their data could be better, they haven’t known how bad it was. For years, enterprises have been kicking the data can down the road, unwilling to fix it, while technical debt gathered. AI requires excellent, accurate data that many enterprises don’t have—at least, not without putting in a great deal of work. This is why many enterprises are giving up on generative AI. The data problems are too expensive to fix, and many CIOs who know what’s good for their careers don’t want to take it on. The intricacies in labeling, cleaning, and updating data to maintain its relevance for training models have become increasingly challenging, underscoring another layer of complexity that organizations must navigate. ... The disparity between the potential and practicality of generative AI projects is leading to cautious optimism and reevaluations of AI strategies. This pushes organizations to carefully assess the foundational elements necessary for AI success, including robust data governance and strategic planning—all things that enterprises are considering too expensive and too risky to deploy just to make AI work.


Why cybersecurity needs a better model for handling OSS vulnerabilities

Identifying vulnerabilities and navigating vulnerability databases is of course only part of the dependency problem; the real work lies in remediating identified vulnerabilities impacting systems and software. Aside from general bandwidth challenges and competing priorities among development teams, vulnerability management also suffers from challenges around remediation, such as the real potential that implementing changes and updates can potentially impact functionality or cause business disruptions. ... Reachability analysis “offers a significant reduction in remediation costs because it lowers the number of remediation activities by an average of 90.5% (with a range of approximately 76–94%), making it by far the most valuable single noise-reduction strategy available,” according to the Endor report. While the security industry can beat the secure-by-design drum until they’re blue in the face and try to shame organizations into sufficiently prioritizing security, the reality is that our best bet is having organizations focus on risks that actually matter. ... In a world of competing interests, with organizations rightfully focused on business priorities such as speed to market, feature velocity, revenue and more, having developers quit wasting time and focus on the 2% of vulnerabilities that truly present risks to their organizations would be monumental.
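The reachability idea can be shown with a tiny sketch. The data model (a `reachable` flag per finding, as a reachability analyzer might emit) is an assumption for illustration:

```python
# Sketch of reachability-based noise reduction. The Endor report's point:
# remediate only vulnerabilities whose vulnerable code paths are actually
# reachable from the application, and drop the rest of the backlog.

findings = [
    {"cve": "CVE-2024-0001", "package": "libfoo", "reachable": True},
    {"cve": "CVE-2024-0002", "package": "libbar", "reachable": False},
    {"cve": "CVE-2024-0003", "package": "libbaz", "reachable": False},
]

def actionable(findings: list[dict]) -> list[dict]:
    """Keep only findings whose vulnerable function is reachable from app code."""
    return [f for f in findings if f["reachable"]]

backlog = actionable(findings)
reduction = 1 - len(backlog) / len(findings)  # fraction of findings filtered out
```

In this toy inventory two of three findings are unreachable and drop out; at real-world scale the report cites an average reduction around 90%.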


The new calling of CIOs: Be the moral arbiter of change

Unfortunately, establishing a strategy for democratizing innovation through gen AI is far from straightforward. Many factors, including governance, security, ethics, and funding, are important, and it’s hard to establish ground rules. ... What’s clear is tech-led innovation is no longer the sole preserve of the IT department. Fifteen years ago, IT was often a solution searching for a problem. CIOs bought technology systems, and the rest of the business was expected to put them to good use. Today, CIOs and their teams speak with their peers about their key challenges and suggest potential solutions. But gen AI, like cloud computing before it, has also made it much easier for users to source digital solutions independently of the IT team. That high level of democratization doesn’t come without risks, and that’s where CIOs, as the guardians of enterprise technology, play a crucial role. IT leaders understand the pain points around governance, implementation, and security. Their awareness means responsibility for AI, and other emerging technologies have become part of a digital leader’s ever-widening role, says Rahul Todkar, head of data and AI at travel specialist Tripadvisor.


5 Strategies For Becoming A Purpose-Driven Leader

Purpose-driven leaders are fueled by more than sheer ambition; they are driven by a commitment to make a meaningful impact. They inspire those around them to pursue a shared purpose each day. This approach is especially powerful in today’s workforce, where 70% of employees say their sense of purpose is closely tied to their work, according to a recent report by McKinsey. Becoming a purpose-driven leader requires clarity, strategic foresight, and a commitment to values that go beyond the bottom line. ... Aligning your values with your leadership style and organizational goals is essential for authentic leadership. “Once you have a firm grasp of your personal values, you can align them with your leadership style and organizational goals. This alignment is crucial for maintaining authenticity and ensuring that your decisions reflect your deeper sense of purpose,” Blackburn explains. ... Purpose-driven leaders embody the values and behaviors they wish to see reflected in their teams. Whether through ethical decision-making, transparency, or resilience in the face of challenges, purpose-driven leaders set the tone for how others in the organization should act. By aligning words with actions, leaders build credibility and trust, which are the foundations of sustainable success.


Chaos Engineering: The key to building resilient systems for seamless operations

The underlying philosophy of Chaos Engineering is to encourage building systems that are resilient to failures. This means incorporating redundancy into system pathways, so that the failure of one path does not disrupt the entire service. Additionally, self-healing mechanisms can be developed such as automated systems that detect and respond to failures without the need for human intervention. These measures help ensure that systems can recover quickly from failures, reducing the likelihood of long-lasting disruptions. To effectively implement Chaos Engineering and avoid incidents like the payments outage, organisations can start by formulating hypotheses about potential system weaknesses and failure points. They can then design chaos experiments that safely simulate these failures in controlled environments. Tools such as Chaos Monkey, Gremlin, or Litmus can automate the process of failure injection and monitoring, enabling engineers to observe system behaviour in response to simulated disruptions. By collecting and analysing data from these experiments, organisations can learn from the failures and use these insights to improve system resilience. This process should be iterative, and organisations should continuously run new experiments and refine their systems based on the results.
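The hypothesis-experiment loop described above can be sketched in miniature. This toy simulates a redundant call path under injected failures; real experiments would use the tools named above against live systems:

```python
# Toy chaos experiment (illustrative): inject failures into one service path
# and verify that redundancy keeps the overall call succeeding. Real runs
# would use tools like Chaos Monkey, Gremlin, or Litmus.

import random

def primary(fail_rate: float, rng: random.Random) -> str:
    """Primary path with injected failures."""
    if rng.random() < fail_rate:
        raise ConnectionError("primary path down")
    return "ok-primary"

def fallback() -> str:
    """Redundant path taken when the primary fails."""
    return "ok-fallback"

def resilient_call(fail_rate: float, rng: random.Random) -> str:
    """Failure of one path must not disrupt the entire service."""
    try:
        return primary(fail_rate, rng)
    except ConnectionError:
        return fallback()

def run_experiment(trials: int = 1000, fail_rate: float = 0.5) -> float:
    """Hypothesis: availability stays at 100% despite 50% primary failures."""
    rng = random.Random(42)  # seeded so the experiment is reproducible
    ok = sum(1 for _ in range(trials)
             if resilient_call(fail_rate, rng).startswith("ok"))
    return ok / trials

availability = run_experiment()
```

The experiment is the iterative part: if availability drops below the hypothesis, the redundancy or self-healing mechanism gets fixed and the experiment is run again.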


Shifting left with telemetry pipelines: The future of data tiering at petabyte scale

In the context of observability and security, shifting left means accomplishing the analysis, transformation, and routing of logs, metrics, traces, and events very far upstream, extremely early in their usage lifecycle — a very different approach in comparison to the traditional “centralize then analyze” method. By integrating these processes earlier, teams can not only drastically reduce costs for otherwise prohibitive data volumes, but can even detect anomalies, performance issues, and potential security threats much quicker, before they become major problems in production. The rise of microservices and Kubernetes architectures has specifically accelerated this need, as the complexity and distributed nature of cloud-native applications demand more granular and real-time insights, and each localized data set is distributed when compared to the monoliths of the past. ... As telemetry data continues to grow at an exponential rate, enterprises face the challenge of managing costs without compromising on the insights they need in real time, or the requirement of data retention for audit, compliance, or forensic security investigations. This is where data tiering comes in. Data tiering is a strategy that segments data into different levels based on its value and use case, enabling organizations to optimize both cost and performance.
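A minimal sketch of tier routing at the pipeline edge, with invented tier names and rules:

```python
# Data-tiering sketch: route telemetry records upstream ("shift left") to a
# storage tier based on value and use case. Tier names and rules are
# illustrative assumptions, not any product's actual configuration.

def route(record: dict) -> str:
    """Pick a tier as the record enters the pipeline, not after centralizing."""
    if record["type"] == "security" or record.get("severity") == "critical":
        return "hot"        # real-time detection and alerting
    if record.get("compliance"):
        return "archive"    # cheap long-term retention for audit/forensics
    if record["type"] == "metric":
        return "warm"       # queryable, lower-cost analytics
    return "cold"           # rarely accessed raw logs

events = [
    {"type": "security", "severity": "critical"},
    {"type": "log", "compliance": True},
    {"type": "metric"},
    {"type": "log"},
]
tiers = [route(e) for e in events]
```

Because the decision is made per record at ingestion, only high-value data pays for hot, real-time storage, while audit and forensic requirements are still met by the cheaper tiers.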


A Transformative Journey: Powering the Future with Data, AI, and Collaboration

The advancements in industrial data platforms and contextualization have been nothing short of remarkable. By making sense of data from different systems—whether through 3D models, images, or engineering diagrams—Cognite is enabling companies to build a powerful industrial knowledge graph, which can be used by AI to solve complex problems faster and more effectively than ever before. This new era of human-centric AI is not about replacing humans but enhancing their capabilities, giving them the tools to make better decisions, faster. Without buy-in from the people who will be affected by any new innovation or technology, success is unlikely. Engaging these individuals early on in the process to solve the issues they find challenging, mundane, or highly repetitive, is critical to driving adoption and creating internal champions to further catalyze adoption. In a fascinating case study shared by one of Cognite’s partners, we learned about the transformative potential of data and AI in the chemical manufacturing sector. A plant operator described how the implementation of mobile devices powered by Cognite’s platform has drastically improved operational efficiency.


Four Steps to Balance Agility and Security in DevSecOps

Tools like OWASP ZAP and Burp Suite can be integrated into continuous integration/continuous delivery (CI/CD) pipelines to automate security testing. For example, LinkedIn uses Ansible to automate its infrastructure provisioning, which reduces deployment times by 75%. By automating security checks, LinkedIn ensures that its rapid delivery processes remain secure. Automating security not only enhances speed but also improves the overall quality of software by catching issues before they reach production. Automated tools can perform static code analysis, vulnerability scanning and penetration testing without disrupting the development cycle, helping teams deploy secure software faster. ... As organizations look to the future, artificial intelligence (AI) and machine learning (ML) will play a crucial role in enhancing both security and agility. AI-driven security tools can predict potential vulnerabilities, automate incident response and even self-heal systems without human intervention. This not only improves security but also reduces the time spent on manual security reviews. AI-powered tools can analyze massive amounts of data, identifying patterns and potential threats that human teams may overlook. This can reduce downtime and the risk of cyberattacks, ultimately allowing organizations to deploy faster and more securely.
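A sketch of such an automated gate; the scanner output format and thresholds are invented stand-ins for a real report (e.g., from OWASP ZAP):

```python
# Sketch of an automated security gate in a CI/CD pipeline: scanner output
# (here a stubbed list, standing in for a real report) is checked against a
# policy, and the build fails before insecure code reaches production.

def gate(scan_results: list[dict], max_high: int = 0, max_medium: int = 3) -> bool:
    """Return True if the build may proceed to deployment."""
    high = sum(1 for r in scan_results if r["risk"] == "high")
    medium = sum(1 for r in scan_results if r["risk"] == "medium")
    return high <= max_high and medium <= max_medium

clean_scan = [{"alert": "X-Content-Type-Options missing", "risk": "low"}]
risky_scan = [
    {"alert": "SQL injection", "risk": "high"},
    {"alert": "Reflected XSS", "risk": "medium"},
]
```

In a pipeline, a `False` result would fail the job, so the security check runs on every commit without anyone scheduling a review.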



Quote for the day:

"If you are truly a leader, you will help others to not just see themselves as they are, but also what they can become." -- David P. Schloss

Daily Tech Digest - September 04, 2024

What is HTTP/3? The next-generation web protocol

HTTPS will still be used as a mechanism for establishing secure connections, but traffic will be encrypted at the HTTP/3 level. Another way to say it is that TLS will be integrated into the network protocol instead of working alongside it. So, encryption will be moved into the transport layer and out of the app layer. This means more security by default—even the headers in HTTP/3 are encrypted—but there is a corresponding cost in CPU load. Overall, the idea is that communication will be faster due to improvements in how encryption is negotiated, and it will be simpler because it will be built-in at a lower level, avoiding the problems that arise from a diversity of implementations. ... In TCP, that continuity isn’t possible because the protocol only understands the IP address and port number. If either of those changes—as when you walk from one network to another while holding a mobile device—an entirely new connection must be established. This reconnection leads to a predictable performance degradation. The QUIC protocol introduces connection IDs or CIDs. For security, these are actually CID sets negotiated by the server and client. 
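The difference between the two connection identities can be illustrated with a toy lookup model (a simplification, not a protocol implementation):

```python
# Why QUIC connection IDs survive a network change while TCP does not:
# TCP identifies a connection by IP address and port, QUIC by a
# negotiated connection ID (CID) that is independent of the network path.

tcp_sessions = {}   # keyed by (client_ip, client_port)
quic_sessions = {}  # keyed by connection ID

def tcp_lookup(client_ip: str, client_port: int):
    return tcp_sessions.get((client_ip, client_port))

def quic_lookup(cid: str):
    return quic_sessions.get(cid)

# Client connects over Wi-Fi.
tcp_sessions[("192.0.2.10", 51000)] = "session-state"
quic_sessions["cid-abc123"] = "session-state"

# Client switches to a mobile network: new source IP, new port.
tcp_after_move = tcp_lookup("198.51.100.7", 49152)  # lookup fails -> full reconnect
quic_after_move = quic_lookup("cid-abc123")         # same CID -> session survives
```

The TCP lookup comes back empty after the move, forcing the reconnection and performance degradation described above, while the QUIC session continues uninterrupted.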


6 things hackers know that they don’t want security pros to know that they know

It’s not a coincidence that many attacks happen at the most challenging of times. Hackers really do increase their attacks on weekends and holidays when security teams are lean. And they’re more likely to strike right before lunchtime and end-of-day, when workers are rushing and consequently less attentive to red flags indicating a phishing attack or fraudulent activity. “Hackers typically deploy their attacks during those times because they’re less likely to be noticed,” says Melissa DeOrio, global threat intelligence lead at S-RM, a global intelligence and cybersecurity consultancy. ... Threat actors actively engage in open-source intelligence (OSINT) gathering, looking for information they can use to devise attacks, Carruthers says. It’s not surprising that hackers look for news about transformative events such as big layoffs, mergers and the like, she says. But CISOs, their teams and other executives may be surprised to learn that hackers also look for news about seemingly innocuous events such as technology implementations, new partnerships, hiring sprees, and executive schedules that could reveal when they’re out of the office.


Take the ‘Shift Left’ Approach a Step Further by ‘Starting Left’

This makes it vital to guarantee code quality and security from the start so that nothing slips through the cracks. Shift left accounts for this. It minimizes risks of bugs and vulnerabilities by introducing code testing and analysis earlier in the SDLC, catching problems before they mount and become trickier to solve or even find. Advancing testing activities earlier puts DevOps teams in a position to deliver superior-quality software to customers with greater frequency. As a practice, “shift left” requires a lot more vigilance in today’s security landscape. But most development teams don’t have the mental (or physical) bandwidth to do it properly — even though it should be an intrinsic part of code development strategy. In fact, the Linux Foundation recently revealed in a study that almost one-third of developers aren’t familiar with secure software development practices. “Shifting left” — performing analysis and code reviews earlier in the development process — is a popular mindset for creating better software. What the mindset should be, though, is to “start left,” not just impose the burden later on in the SDLC for developers. ... This mindset of “start left” focuses not only on an approach that values testing early and often, but also on using the best tools to do so.


ONCD Unveils BGP Security Road Map Amid Rising Threats

The guidance comes amid an intensified threat landscape for BGP, which serves as the backbone of global internet traffic routing. BGP is a foundational yet vulnerable protocol, developed at a time when many of today's cybersecurity risks did not exist. Coker said the ONCD is committed to covering at least 60% of the federal government's IP space by registration service agreements "by the end of this calendar year." His office recently led an effort to develop a federal RSA template that federal agencies can use to facilitate their adoption of Resource Public Key Infrastructure, which can be used to mitigate BGP vulnerabilities. ... The ONCD report underscores how BGP "does not provide adequate security and resilience features" and lacks critical security capabilities, including the ability to validate the authority of remote networks to originate route announcements and to ensure the authenticity and integrity of routing information. The guidance tasks network operators with developing and periodically updating cybersecurity risk management plans that explicitly address internet routing security and resilience. It also instructs operators to identify all information systems and services internal to the organization that require internet access and assess the criticality of maintaining those routes for each address.


Efficient DevSecOps Workflows With a Little Help From AI

When it comes to software development, AI offers many possibilities to enhance workflows at every stage—from splitting teams into specialized roles such as development, operations, and security to facilitating typical steps like planning, managing, coding, testing, documentation, and review. AI-powered code suggestions and generation capabilities can automate tasks like autocompletion and identification of missing dependencies, making coding more efficient. Additionally, AI can explain code, summarize algorithms, suggest performance improvements, and refactor long code into object-oriented patterns or other languages. ... Instead of manually sifting through job logs, AI can analyze them and provide actionable insights, even suggesting fixes. By refining prompts and engaging in conversations with the AI, developers can quickly diagnose and resolve issues, even receiving tips for optimization. Security is crucial, so sensitive data like passwords and credentials must be filtered before analysis. A well-crafted prompt can instruct the AI to explain the root cause in a way any software engineer can understand, accelerating troubleshooting. This approach can significantly improve developer efficiency.
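The filtering step called out above can be as simple as a redaction pass over the job log before it is sent to an LLM. The patterns below are illustrative only, not an exhaustive secret-detection scheme; a production pipeline would use a vetted scanner.

```python
# Hedged sketch: scrub obvious credentials from a CI job log before handing
# it to an AI assistant for failure analysis. Patterns are illustrative.
import re

REDACTIONS = [
    # key=value / key: value forms for common credential names
    (re.compile(r"(?i)(password|passwd|token|api[_-]?key)\s*[=:]\s*\S+"),
     r"\1=<REDACTED>"),
    # HTTP bearer tokens
    (re.compile(r"(?i)authorization:\s*bearer\s+\S+"),
     "Authorization: Bearer <REDACTED>"),
]

def scrub(log_text: str) -> str:
    """Replace recognizable secrets with placeholders, line by line."""
    for pattern, replacement in REDACTIONS:
        log_text = pattern.sub(replacement, log_text)
    return log_text
```

Only the scrubbed text would then go into the prompt asking the model to explain the root cause.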


PricewaterhouseCoopers’ new CAIO – workers need to know their role with AI

“AI is becoming a natural part of everything we make and do. We’re moving past the AI exploration cycle, where managing AI is no longer just about tech, it is about helping companies solve big, important and meaningful problems that also drive a lot of economic value. “But the only way we can get there is by bringing AI into an organization’s business strategy, capability systems, products and services, ways of working and through your people. AI is more than just a tool — it can be viewed as a member of the team, embedding into the end-to-end value chain. The more AI becomes naturally embedded and intrinsic to an organization, the more it will help both the workforce and business be more productive and deliver better value. “In addition, we will see new products and services that are fully AI-powered come into the market — and those are going to be key drivers of revenue and growth.” ... You need to consider the bigger picture, understanding how AI is becoming integrated in all aspects of your organization. That means having your RAI leader working closely with your company’s CAIO (or equivalent) to understand changes in your operating model, business processes, products and services.


What Is Active Metadata and Why Does It Matter?

Active metadata’s ability to update automatically whenever the data it describes changes now extends beyond the data profile itself to enhance the management of data access, classification, and quality. Passive metadata’s static nature limits its use to data discovery, but the dynamic nature of active metadata delivers real-time insights into the data’s lineage to help automate data governance:

Get a 360-degree view of data - Active metadata’s ability to auto-update ensures that metadata delivers complete and up-to-date descriptions of the data’s lineage, context, and quality. Companies can tell at a glance whether the data is being used effectively, appropriately, and in compliance with applicable regulations.

Monitor data quality in real time - Automatic metadata updates improve data quality management by providing up-to-the-minute metrics on data completeness, accuracy, and consistency. This allows organizations to identify and respond to potential data problems before they affect the business.

Patch potential governance holes - Active metadata allows data governance rules to be enforced automatically to safeguard access to the data, ensure it’s appropriately classified, and confirm it meets all data retention requirements. 
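The core mechanic — metadata that recomputes itself on every change to the data it describes — can be sketched in miniature. This is a simplified illustration of the concept, not any particular vendor's implementation.

```python
# Minimal sketch of "active" metadata: quality metrics are recomputed on
# every write, instead of being captured once and going stale.
from datetime import datetime, timezone

class ActiveMetadata:
    def __init__(self):
        self.row_count = 0
        self.completeness = 0.0   # share of records with no null fields
        self.last_updated = None

    def refresh(self, records):
        """Recompute metrics from the current data."""
        self.row_count = len(records)
        non_null = sum(1 for r in records
                       if all(v is not None for v in r.values()))
        self.completeness = non_null / len(records) if records else 0.0
        self.last_updated = datetime.now(timezone.utc)

class Dataset:
    def __init__(self):
        self.records = []
        self.metadata = ActiveMetadata()

    def write(self, record):
        self.records.append(record)
        self.metadata.refresh(self.records)  # metadata moves with the data
```

Passive metadata would be the same metrics computed once at catalog time; the difference is the `refresh` call on every write.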


How to Get IT and Security Teams to Work Together Effectively

Successful collaboration requires a sense of shared mission, Preuss says. Transparency is crucial. "Leverage technology and automation to effectively share information and challenges across both teams," she advises. Building and practicing trust and communication in an environment that's outside the norm is also essential. One way to do so is by conducting joint business resilience drills. "Whether a cyber war game or an environmental crisis [exercise], resilience drills are one way to test the collaboration between teams before an event occurs." ... When it comes to cross-team collaboration, Scott says it's important for members to understand their communication style as well as the communication styles of the people they work with. "At Immuta, we do this through a DiSC assessment, which each employee is invited to complete upon joining the company." To build an overall sense of cooperation and teamwork, Jeff Orr, director of research, digital technology at technology research and advisory firm ISG, suggests launching an exercise simulation in which both teams are required to collaborate in order to succeed. 


Protecting national interests: Balancing cybersecurity and operational realities

A significant challenge we face today is safeguarding the information space against misinformation, disinformation, manipulation and deceptive content. Whether this is at the behest of nation-states, or their supporters, it can be immensely destabilising and disruptive. We must find a way to tackle this challenge, but this should not just focus on the responsibilities held by social media platforms, but also on how we can detect targeted misinformation, counter those narratives and block the sources. Technology companies have a key role in taking down content that is obviously malicious, but we need the processes to respond in hours, rather than days and weeks. More generally, infrastructure used to launch attacks can be spun up more quickly than ever and attacks manifest at speed. This requires the government to work more closely with major technology and telecommunication providers so we can block and counter these threats – and that demands information sharing mechanisms and legal frameworks which enable this. Investigating and countering modern transnational cybercrime demands very different approaches, and of course AI will undoubtedly play a big part in this, but sadly both in attack and defence.


How leading CIOs cultivate business-centric IT

With digital strategy and technology as the brains behind most business functions and operating models, IT organizations are determined to inject more business-centricity into their employee DNA. IT leaders have been burnishing their business acumen and embracing a non-technical remit for some time. Now, there’s a growing desire to infuse that mentality throughout the greater IT organization, stretching beyond basic business-IT alignment to creating a collaborative force hyper-fixated on channeling innovation to advance enterprise business goals. “IT is no longer the group in the rear with the gear,” says Sabina Ewing, senior vice president of business and technology services and CIO at Abbott Laboratories. ... While those with robust experience and expertise in highly technical areas such as cloud architecture or cybersecurity are still highly coveted, IT organizations like Duke Health, ServiceNow, and others are also seeking a very different type of persona. Zoetis, a leading animal health care company, casts a wider net when seeking tech and digital talent, focusing on those who are collaborative, passionate about making a difference, and adaptable to change. Candidates should also have a strong understanding of technology application, says CIO Keith Sarbaugh.



Quote for the day:

"When someone tells me no, it doesn't mean I can't do it, it simply means I can't do it with them." -- Karen E. Quinones Miller

Daily Tech Digest - May 29, 2024

Algorithmic Thinking for Data Scientists

While data scientists with computer science degrees will be familiar with the core concepts of algorithmic thinking, many increasingly enter the field with other backgrounds, ranging from the natural and social sciences to the arts; this trend is likely to accelerate in the coming years as a result of advances in generative AI and the growing prevalence of data science in school and university curriculums. ... One topic that deserves special attention in the context of algorithmic problem solving is that of complexity. When comparing two different algorithms, it is useful to consider the time and space complexity of each algorithm, i.e., how the time and space taken by each algorithm scales relative to the problem size (or data size). ... Some algorithms may manifest additive or multiplicative combinations of the above complexity levels. E.g., a for loop followed by a binary search entails an additive combination of linear and logarithmic complexities, attributable to sequential execution of the loop and the search routine, respectively.
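The additive case mentioned above is easy to see in code: a linear pass over the data followed by a binary search costs O(n) + O(log n), which the linear term dominates, so the whole routine is O(n). The function below is a made-up illustration of that combination.

```python
# Worked example of additive complexity: O(n) loop + O(log n) binary search.
import bisect

def scan_then_search(sorted_values, query):
    """Sum all values (linear pass), then locate query (binary search)."""
    # Sequential loop over every element: O(n)
    total = 0
    for v in sorted_values:
        total += v
    # Binary search on the already-sorted input: O(log n)
    index = bisect.bisect_left(sorted_values, query)
    return total, index
```

Because the stages run sequentially, their costs add; had the binary search run inside the loop, the costs would multiply instead, giving O(n log n).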


Job seekers and hiring managers depend on AI — at what cost to truth and fairness?

The darker side to using AI in hiring is that it can bypass potential candidates based on predetermined criteria that don’t necessarily take all of a candidate’s skills into account. And for job seekers, the technology can generate great-looking resumes, but often they’re not completely truthful when it comes to skill sets. ... “AI can sound too generic at times, so this is where putting your eyes on it is helpful,” Toothacre said. She is also concerned about the use of AI to complete assessments. “Skills-based assessments are in place to ensure you are qualified and check your knowledge. Using AI to help you pass those assessments is lying about your experience and highly unethical.” There’s plenty of evidence that genAI can improve resume quality, increase visibility in online job searches, and provide personalized feedback on cover letters and resumes. However, concerns about overreliance on AI tools, lack of human touch in resumes, and the risk of losing individuality and authenticity in applications are universal issues that candidates need to be mindful of regardless of their geographical location, according to Helios’ Hammell.


Comparing smart contracts across different blockchains from Ethereum to Solana

Polkadot is designed to enable interoperability among various blockchains through its unique architecture. The network’s core comprises the relay chain and parachains, each playing a distinct role in maintaining the system’s functionality and scalability. ... Developing smart contracts on Cardano requires familiarity with Haskell for Plutus and an understanding of Marlowe for financial contracts. Educational resources like the IOG Academy provide learning paths for developers and financial professionals. Tools like the Marlowe Playground and the Plutus development environment aid in simulating and testing contracts before deployment, ensuring they function as intended. ... Solana’s smart contracts are stateless, meaning the contract logic is separated from the state, which is stored in external accounts. This separation enhances security and scalability by isolating the contract code from the data it interacts with. Solana’s account model allows for program reusability, enabling developers to create new tokens or applications by interacting with existing programs, reducing the need to redeploy smart contracts, and lowering costs.
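Solana's separation of logic from state can be conveyed with a loose analogy — in Python rather than on-chain Rust, so this is a conceptual sketch only: the "program" is a pure function, and all state arrives as account data passed in and handed back.

```python
# Loose, off-chain analogy of a stateless program: the transfer logic owns
# no state; balances live in externally supplied "accounts".

def transfer(accounts: dict, src: str, dst: str, amount: int) -> dict:
    """Pure program logic over externally stored account state."""
    if accounts[src] < amount:
        raise ValueError("insufficient funds")
    updated = dict(accounts)   # state is external and returned explicitly
    updated[src] -= amount
    updated[dst] += amount
    return updated
```

Because the logic carries no state of its own, the same "program" can be reused against any set of accounts — the property the article credits for Solana's program reusability and lower redeployment costs.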


3 things CIOs can do to make gen AI synch with sustainability

“If you’re only buying inference services, ask them how they can account for all the upstream impact,” says Tate Cantrell, CTO of Verne, a UK-headquartered company that provides data center solutions for enterprises and hyperscalers. “Inference output takes a split second. But the only reason those weights inside that neural network are the way they are is because of massive amounts of training — potentially one or two months of training at something like 100 to 400 megawatts — to get that infrastructure the way it is. So how much of that should you be charged for?” Cantrell urges CIOs to ask providers about their own reporting. “Are they doing open reporting about the full upstream impact that their services have from a sustainability perspective? How long is the training process, how long is it valid for, and how many customers did that weight impact?” According to Sundberg, an ideal solution would be to have the AI model tell you about its carbon footprint. “You should be able to ask Copilot or ChatGPT what the carbon footprint of your last query is,” he says. 
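Cantrell's megawatt figures can be put in rough perspective with simple arithmetic. The numbers below are only the bounds he quotes (100–400 MW for one to two months), not a real training-run measurement.

```python
# Back-of-the-envelope training energy from the quoted figures.
def training_energy_mwh(power_mw: float, days: float) -> float:
    """Energy in MWh for a sustained draw of power_mw over `days` days."""
    return power_mw * 24 * days

low  = training_energy_mwh(100, 30)   # 100 MW for ~1 month
high = training_energy_mwh(400, 60)   # 400 MW for ~2 months
```

Even the low end is on the order of tens of thousands of MWh — which is why how that upstream cost gets amortized across inference customers is a fair question to put to providers.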


EU’s ChatGPT taskforce offers first look at detangling the AI chatbot’s privacy compliance

The taskforce’s report discusses this knotty lawfulness issue, pointing out ChatGPT needs a valid legal basis for all stages of personal data processing — including collection of training data; pre-processing of the data (such as filtering); training itself; prompts and ChatGPT outputs; and any training on ChatGPT prompts. The first three of the listed stages carry what the taskforce couches as “peculiar risks” for people’s fundamental rights — with the report highlighting how the scale and automation of web scraping can lead to large volumes of personal data being ingested, covering many aspects of people’s lives. It also notes scraped data may include the most sensitive types of personal data (which the GDPR refers to as “special category data”), such as health info, sexuality, political views etc, which requires an even higher legal bar for processing than general personal data. On special category data, the taskforce also asserts that just because it’s public does not mean it can be considered to have been made “manifestly” public — which would trigger an exemption from the GDPR requirement for explicit consent to process this type of data.


Avoiding the cybersecurity blame game

Genuine negligence or deliberate actions should be handled appropriately, but apportioning blame and meting out punishment must be the final step in an objective, reasonable investigation. It should certainly not be the default reaction. So far, so reasonable, yes? But things are a little more complicated than this. It’s all very well saying, “don’t blame the individual, blame the company”. Effectively, no “company” does anything; only people do. The controls, processes and procedures that let you down were created by people – just different people. If we blame the designers of controls, processes and procedures… well, we are just shifting blame, which is still counterproductive. ... Managers should use the additional resources to figure out how to genuinely change the work environment in which employees operate and make it easier for them to do their job in a secure practical manner. Managers should implement a circular, collaborative approach to creating a frictionless, safer environment, working positively and without blame.


The decline of the user interface

The Ok and Cancel buttons played important roles. A user might go to a Settings dialog, change a bunch of settings, and then click Ok, knowing that their changes would be applied. But often, they would make some changes and then think “You know, nope, I just want things back like they were.” They’d hit the Cancel button, and everything would reset to where they started. Disaster averted. Sadly, this very clear and easy way of doing things somehow got lost in the transition to the web. On the web, you will often see Settings pages without Ok and Cancel buttons. Instead, you’re expected to click an X in the upper right to make the dialog close, accepting any changes that you’ve made. ... In the newer versions of Windows, I spend a dismayingly large amount of time trying to get the mouse to the right spot in the corner or edge of an application so that I can size it. If I want to move a window, it is all too frequently difficult to find a location at the top of the application to click on that will result in the window being relocated. Applications used to have a very clear title bar that was easy to see and click on.
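The pattern those buttons implemented is a snapshot-and-restore over the settings state: snapshot on open, commit on Ok, roll back on Cancel. A minimal sketch (hypothetical class, not any real UI toolkit):

```python
# Sketch of the classic Ok/Cancel contract: edits are tentative until Ok;
# Cancel restores the snapshot taken when the dialog opened.
import copy

class SettingsDialog:
    def __init__(self, settings: dict):
        self.settings = settings
        self._snapshot = copy.deepcopy(settings)  # taken on open

    def change(self, key, value):
        self.settings[key] = value                # tentative edit

    def ok(self):
        self._snapshot = copy.deepcopy(self.settings)  # commit changes

    def cancel(self):
        self.settings.clear()
        self.settings.update(self._snapshot)      # everything back as it was
```

Many web settings pages skip the snapshot entirely and apply each change immediately, which is exactly why "just put it back like it was" became hard.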


Lawmakers paint grim picture of US data privacy in defending APRA

At the center of the debate is the American Privacy Rights Act (APRA), the push for a federal data privacy law that would either simplify a patchwork of individual state laws – or run roughshod over existing privacy legislation, depending on which state is offering an opinion. While harmonizing divergent laws seems wise as a general measure, states like California, where data privacy laws are already much stricter than in most places, worry about its preemptive clauses weakening their hard-fought privacy protections. Rodgers says APRA is “an opportunity for a reset, one that can help return us to the American Dream our Founders envisioned. It gives people the right to control their personal information online, something the American people overwhelmingly want,” she says. “They’re tired of having their personal information abused for profit.” From loose permissions on sharing location data to exposed search histories, there are far too many holes in Americans’ digital privacy for Rodgers’ liking. Pointing to the especially sensitive matter of childrens’ data, she says that “as our kids scroll, companies collect nearly every data point imaginable to build profiles on them and keep them addicted. ...”


Picking an iPaaS in the Age of Application Overload

Companies face issues using proprietary integration solutions, as they end up with black-box solutions with limited flexibility. For example, the inability to natively embed outdated technology into modern stacks, such as cloud native supply chains with CI/CD pipelines, can slow down innovation and complicate the overall software delivery process. Companies should favor iPaaS technologies grounded in open source and open standards. Can you deploy it to your container orchestration cluster? Can you plug it into your existing GitOps procedures? Such solutions not only ensure better integration into proven QA-tested procedures but also offer greater freedom to migrate, adapt and debug as needs evolve. ... As organizations scale, so too must their integration solutions. Companies should avoid iPaaS solutions offering only superficial “cloud-washed” capabilities. They should prioritize cloud native solutions designed from the ground up for the cloud, and that leverage container orchestration tools like Kubernetes and Docker Swarm, which are essential for ensuring scalability and resilience.
Shifting left is a cultural and practice shift, but it also includes technical changes to how a shared testing environment is set up. ... The approach scales effectively across engineering teams, as each team or developer can work independently on their respective services or features, thereby reducing dependencies. While this is great advice, it can feel hard to implement in the current development environment: If the process of releasing code to a shared testing cluster takes too much time, it doesn’t seem feasible to test small incremental changes. ... The difference between finding bugs as a user and finding them as a developer is massive: When an operations or site reliability engineer (SRE) finds a problem, they need to find the engineer who released the code, describe the problem they’re seeing, and present some steps to replicate the issue. If, instead, the original developer finds the problem, they can cut out all those steps by looking at the output, finding the cause, and starting on a fix. This proactive approach to quality reduces the number of bugs that need to be filed and addressed later in the development cycle.



Quote for the day:

"The best and most beautiful things in the world cannot be seen or even touched- they must be felt with the heart." -- Helen Keller

Daily Tech Digest - March 10, 2024

What’s the privacy tax on innovation?

A few decades ago, California had one of the strongest definitions for certifying Organic foods in the US. Eventually, the US government stepped in with a watered-down definition. Despite the pain of new privacy controls, the US data broker industry will lobby for a similar approach to at least harmonize privacy regulations at the Federal level that limit the impact on their business models when operating across state lines. For businesses and consumers, a more equitable approach would be to add a few more teeth to the cost of data misuse arising from legal sales, employee theft, or breaches. A few high-profile payouts arising from theft or when this data is used as part of multi-million dollar ransomware attacks on critical business systems would have a focusing effect on better privacy management practices. Another option is to turn to banks as holders of trust. Banks may be a good first point for managing the financial data we directly share with them. But what about all the data that others gather that may not be tied to traditional identifiers like social security numbers (SSN) used to unify data, such as IP addresses, phone numbers, Wi-Fi hubs, or the trail of GPS dots that gravitate to your home or office?


Living with the ghost of a smart home’s past

There were the window shades that always opened at 8AM and always closed at sundown. My brother disconnected everything that looked like a hub, and still, operating on some inaccessible internal clock, the shades carried on as they were once programmed to do. ... This is the state of home ownership in 2024! People have been making their homes smart with off-the-shelf parts for well over a decade now. Sometimes they sell those homes, and the new homeowners find themselves mired in troubleshooting when they should be trying to pick out wall colors. Some former homeowners will provide onboarding to the home’s smart home system, but most do as the guy who used to own my brother’s house did. They walk away and leave it as an adventure for the next person. ... I really hope the new renters of my old Brooklyn walk-up appreciate all the 2014 Philips Hue lights I left installed in the basement. There’s a calculus you make as you’re moving. It’s a hectic time, and there’s a lot to be done. Do you want to spend half the day freeing all those Hue bulbs from their obnoxious and broken recessed light housings, or do you want to leave a potential gift for the next homeowner and get started on nesting in your new place? 


Overcoming the AI Privacy Predicament

According to one study by Brookings, while 57% of consumers felt that AI will have a net negative impact on privacy, 34% were unsure about how AI would affect their privacy. Indeed, AI evokes a mixed set of thoughts and emotions in consumers. For most people, the promise of AI is clear: from increasing efficiency, to automating mundane tasks and freeing up more time for creative work, to improving outcomes in areas such as healthcare and education. ... In the realm of AI, the lack of trust is significant. Indeed, 81% of consumers think the information collected by AI companies will be used in ways people are uncomfortable with, as well as in ways that were not originally intended. That consumers are put in a seemingly impossible predicament regarding their privacy leaves them little choice but to a.) consent, or b.) forgo use of the product or service. Both choices leave consumers wanting more from the digital economy. When a new technology has negative implications for privacy, consumers have shown they are willing to engage in privacy-protective behaviors, such as deleting an app, withholding personal information, or abandoning an online purchase altogether.


How Static Analysis Can Save Your Software

While static analysis is a means of pattern detection, fixing an actual bug (for example, dereferencing a null pointer) is much harder, albeit possible. It becomes mathematically difficult to track exponentially increasing possible states. We call this “path explosion.” Say you’re writing code that, given two integers, divides one by the other, and there are various failure modes depending on the integers’ values. But what if the denominator is zero? That results in undefined behavior, and it means you need to look at where those integers came from, their possible values and what branches they took along the way. If you can see that the denominator is checked against zero before the division — and branches away if it is — you should be safe from division-by-zero issues. This theoretical stepping through stages of code is called “symbolic execution.” It’s not too complicated if the checkpoint is fairly close to the division process, but the further away it gets, the more branches you must account for. Crossing the function boundary gets even trickier. But once you have calls from other translation units, the problem becomes intractable in the general case. 
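The easy end of this spectrum — before path explosion makes the general case intractable — is flagging a division whose denominator is a literal zero, which is plain pattern detection over the syntax tree. A toy checker, far simpler than real symbolic execution:

```python
# Toy static check: flag divisions by the literal 0. Real analyzers must
# instead track where a denominator's value came from across branches.
import ast

def find_literal_div_by_zero(source: str):
    """Return line numbers of `x / 0`, `x // 0`, or `x % 0` expressions."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.BinOp)
                and isinstance(node.op, (ast.Div, ast.FloorDiv, ast.Mod))
                and isinstance(node.right, ast.Constant)
                and node.right.value == 0):
            hits.append(node.lineno)
    return hits
```

Note what this cannot do: if the zero arrives through a variable, the checker would need symbolic execution to follow the value through assignments, branches, and function boundaries — which is where the cost explodes.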


Avoiding Shift Left Exhaustion – Part 1

Shift left requires developers to be involved in testing, quality assurance, and collaboration throughout the development cycle. While this is undoubtedly beneficial for the final product, it can lead to an increased workload for developers who must balance their coding responsibilities with testing and problem-solving tasks. ... Adapting to Shift left practices often requires developers to acquire new skills and stay current with the latest testing methodologies and tools. This continuous learning can be intellectually stimulating and exhausting, especially in an industry that evolves rapidly. Developers must understand new tools, processes, and technologies as more things get moved earlier in the development lifecycle. ... The added pressure of early and continuous testing and the demand for faster development cycles can lead to developer burnout. When developers are overburdened, their creativity and productivity may suffer, ultimately impacting the software quality they produce. ... Shifting testing and quality assurance left in the development process may impose strict time constraints. Developers may feel pressured to meet tight deadlines, which can be stressful and lead to rushed decision-making, potentially compromising the software’s quality.


Ransomware Attacks on Critical Infrastructure Are Surging

Especially under fire are critical services. Healthcare and public health agencies dominated, filing 249 reports to IC3 last year over ransomware attacks, followed by 218 reports from critical manufacturing and 156 from government facilities. Ransomware-wielding attackers are potentially targeting these sectors most because they perceive the victims as having a proclivity to pay, given the risk to life or essential business processes posed by their systems being disrupted. Last year, IC3 received a ransomware report from at least one victim in all of the 16 critical infrastructure sectors - which include financial services, food and agriculture, energy and communications - except for two: dams and nuclear reactors, materials and waste. The ransomware group tied to the largest number of successful attacks against critical infrastructure reported to IC3 last year was LockBit, followed by Alphv/BlackCat, Akira, Royal and Black Basta. Law enforcement recently disrupted Alphv/BlackCat, as well as LockBit, after which each group separately claimed to have rebooted before appearing to go dark. 


What’s the missing piece for mainstream Web3 adoption?

Today’s Web3 lacks a unifying ecosystem, causing the market to fracture into multiple, independently evolving use cases. Crypto enthusiasts have to use various decentralized applications (DApps) and platforms to perform multiple transactions and interact with the different sectors of Web3. However, this isn’t a sustainable growth model for the Web3 industry and is more of a deterrent rather than a benefit when it comes to crypto adoption. ... Recognizing the need for a more integrated approach, some Web3 players are moving beyond the hype. Legion Network is emerging as a notable example among these. As a one-stop shop for Web3, Legion Network addresses the complexity of the industry and reaches new audiences. It brings together essential Web3 use cases, including a proprietary crypto wallet with comprehensive portfolio tracking, DeFi swaps and bridges, engaging play-to-earn/win games, captivating quests with prize rewards, a launchpad for emerging projects and a unique SocialFi experience that fosters community engagement.


What’s Driving Changes in Open Source Licensing?

In response to the challenges posed by cloud computing, some vendor-driven open source projects have changed their licenses or their GTM models. For example, MongoDB, Elastic, Confluent, Redis Labs and HashiCorp have adopted new licenses that restrict the use of their software-as-a-service by third parties or require them to pay fees or share their modifications. These changes are intended to protect the revenue and sustainability of the original vendors and to ensure that they can continue to invest in the open source project. However, these changes have also caused some controversy and backlash from the user community, who may feel that the project is becoming less open and more proprietary or that they are losing some of the benefits and freedoms of open source. However, community-driven open source projects have largely maintained their permissive licenses and their collaborative approach. These projects still benefit from the diversity and scale of their user community, who contribute to the development, maintenance, support and security of the software. These projects also leverage the support of organizations and foundations, such as the Linux Foundation, the Apache Software Foundation and the CNCF, who provide governance, funding and infrastructure. 


Botnets: The uninvited guests that just won’t leave

Reducing response time is vital. The longer the dwell time, the more likely it is that botnets can impact a business, particularly given that botnets can spread across many devices in a short period. How can security teams improve detection processes and shrink the time it takes to respond to malicious activity? Security practitioners should have multiple tools and strategies at their disposal to protect their organization’s networks against botnets. An obvious first step is to prevent access to all recognized C2 databases. Next, leverage application control to restrict unauthorized access to your systems. Additionally, use Domain Name System (DNS) filtering to target botnets explicitly, concentrating on each category or website that might expose your system to them. DNS filtering also helps to mitigate the Domain Generation Algorithms that botnets often use. Monitoring data while it enters and leaves devices is vital as well, as you can spot botnets as they attempt to infiltrate your computers or those connected to them. This is what makes security information and event management technology paired with malicious indicators of compromise detections so critical to protecting against bots. 
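The DNS filtering step described above boils down to matching queried names against a blocklist, including subdomains of blocked domains. A simplified sketch with made-up domain names:

```python
# Simplified DNS-filter check: block exact matches and any subdomain of a
# blocklisted name. Domains below are fictional placeholders.
BLOCKLIST = {"evil-c2.example", "botnet-cc.test"}

def is_blocked(domain: str) -> bool:
    domain = domain.lower().rstrip(".")
    parts = domain.split(".")
    # Test the name itself and every parent suffix against the blocklist,
    # so "a.b.evil-c2.example" is caught by the "evil-c2.example" entry.
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))
```

Real deployments layer this with reputation feeds and heuristics for the algorithmically generated domains (DGAs) the article mentions, which by design never appear on a static list.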


Are You Ready to Protect Your Company From Insider Threats? Probably Not

The real problem is that employees and employers don’t trust each other. This is an enormous risk for employees, as this environment makes it more likely that insider threats, security risks that originate from within the company, will emerge or intensify when tensions are high and motivations, including financial strain, dissatisfaction or desperation, drive individuals to act against their own organization. That’s the bad news. The worst news is that most companies are unprepared to meet the moment. ... Insider threats often betray their motivation. Sometimes, they tell colleagues about their intentions. Other times, their actions speak louder than words, as attempts to work around security protocols, active resentment for coworkers or leadership or general job dissatisfaction can be a red flag that an insider threat is about to act. Explaining the impact of human intelligence, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) writes, “An organization’s own personnel are an invaluable resource to observe behaviors of concern, as are those who are close to an individual, such as family, friends, and coworkers.”



Quote for the day:

"Leaders must be close enough to relate to others, but far enough ahead to motivate them." -- John C. Maxwell