Daily Tech Digest - November 27, 2024

Cybersecurity’s oversimplification problem: Seeing AI as a replacement for human agency

One clear solution to the problem of technology oversimplification is to tailor AI training and educational initiatives towards diverse endpoints. Research clearly demonstrates that knowledge of the underlying functions of security professions genuinely moderates the excesses of encountering disruptive, unfamiliar conditions. Unfortunately, because this effect is itself mediated by the oversimplification mentality, more is required. Specifically, discussion of the foundational functionality of AI systems needs to be married to as many diverse outcomes as possible to emphasize the dynamism of the technology. ... Naturally, one of the value propositions of studies like the one presented here is the ability for professionals to see the world as another kind of professional might. Whilst tabletop exercises are already a core tool of the cybersecurity profession, there are opportunities to incorporate comparative, application-based learning about AI using simple simulations. ... Finally, wherever possible, role rotation offers a clear advantage in overcoming the issues illustrated here. In testing, diversity of career roles, over and above career length, played a similar role in mitigating the impact of novel conditions on response priorities.


How to Create an Accurate IT Project Timeline

Building resilient project plans that can handle unforeseen yet often inevitable changes is key to ensuring timeline accuracy. "Understanding dependencies, identifying bottlenecks, and planning delivery around these constraints have been shown to be important for timeline accuracy," Chandrasekar says. Project accuracy also depends on clear communication and tracking. "It's critical to consistently review timelines with your project team and stakeholders, making updates as new information is discovered," Naqib says. He adds that project timelines should be tracked with the support of a work management tool, such as Smartsheet or Jira, to measure progress and identify gaps. Yet even with perfect planning, unanticipated delays or changes may occur. Proper planning and communication are key to ensuring timeline accuracy, says Anne Gee, director of delivery excellence for IT managed services at data and technology consulting firm Resultant. ... The best way to get a lagging timeline back on schedule is to work with your project team to identify the root cause, Naqib advises. "Then, you can work with your team and your greater organization to explore possible resolution accelerators that will keep your timeline on track."
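
To make the dependency point concrete, here is a minimal sketch (not from the article) of how mapping task dependencies exposes the critical path that constrains a timeline. The task names and durations are hypothetical.

```python
from graphlib import TopologicalSorter

# Hypothetical tasks: name -> (duration in days, set of prerequisite tasks).
tasks = {
    "requirements": (5,  set()),
    "design":       (10, {"requirements"}),
    "backend":      (15, {"design"}),
    "frontend":     (12, {"design"}),
    "integration":  (8,  {"backend", "frontend"}),
    "uat":          (5,  {"integration"}),
}

# Earliest finish for each task = its duration plus the latest-finishing prerequisite.
finish = {}
for name in TopologicalSorter({t: deps for t, (_, deps) in tasks.items()}).static_order():
    duration, deps = tasks[name]
    finish[name] = duration + max((finish[d] for d in deps), default=0)

# Tasks on the longest chain are the bottlenecks: any slip there slips everything.
for name, day in sorted(finish.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} finishes no earlier than day {day}")
```

Any slip in a task on the longest chain pushes out the final date, which is why planning delivery around dependencies and bottlenecks matters so much for accuracy.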


Shaping the Future of AI Benchmarking – Trends & Challenges

AI benchmarking serves as a foundational tool for evaluating and advancing artificial intelligence systems. Its primary objectives address critical aspects of AI development, ensuring that models are efficient, effective, and aligned with real-world needs. ... Benchmarks provide valuable insights into a model's limitations, serving as a roadmap for enhancement. For instance:

- Identifying Bottlenecks: If a model struggles with inference speed or accuracy on specific data types, benchmarks highlight these areas for targeted optimization.
- Algorithm Development: Benchmarks inspire innovation by exposing gaps in performance, encouraging the development of new algorithms or architectural designs.
- Data Quality Assessment: Poor performance on benchmarks may indicate issues with training data, prompting better preprocessing, augmentation, or dataset refinement techniques.

... AI benchmarking involves a systematic process to evaluate the performance of AI models using rigorous methodologies. These methodologies ensure that assessments are fair, consistent, and meaningful, enabling stakeholders to make informed decisions about model performance and applicability.
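
As a rough illustration of what a systematic evaluation involves, the sketch below times and scores an arbitrary model callable over a labeled dataset. It is a generic harness under assumed interfaces (`model_fn`, `dataset`), not any specific benchmarking suite.

```python
import time
import statistics

def benchmark(model_fn, dataset, warmup=3):
    """Measure accuracy and per-example latency of `model_fn` over `dataset`.

    `model_fn` is any callable taking an input and returning a prediction;
    `dataset` is a list of (input, expected_label) pairs.
    """
    # Warm-up runs so caches and lazy initialization don't skew the timings.
    for x, _ in dataset[:warmup]:
        model_fn(x)

    latencies, correct = [], 0
    for x, expected in dataset:
        start = time.perf_counter()
        prediction = model_fn(x)
        latencies.append(time.perf_counter() - start)
        correct += (prediction == expected)

    return {
        "accuracy": correct / len(dataset),
        "median_latency_ms": statistics.median(latencies) * 1000,
    }

# Hypothetical usage: a trivial "model" that uppercases its input.
print(benchmark(lambda s: s.upper(), [("a", "A"), ("b", "B"), ("c", "c")]))
```

Fixing the dataset, the warm-up procedure, and the reported statistics is what makes results comparable across models; varying any of them quietly invalidates the comparison.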


Why data is the hottest commodity in cybersecurity

“The value of data has skyrocketed in recent years, transforming it into one of the most sought-after commodities in the digital age. The rise of AI and machine learning has only amplified the threat to data, as attackers can now automate their efforts and create more sophisticated and targeted campaigns.” Saceanu noted that Irish organisations, like those globally, are struggling to secure their systems and private information, with industries that typically hold sensitive data, such as those in healthcare, finance and education, being particularly vulnerable. “We have seen a massive focus on targeting organisations that operate in critical infrastructure for various motivations – financially oriented or to disrupt operations. This means that there are more and more ransomware attacks on manufacturing, energy and healthcare that are not only encrypting data, but also exfiltrating this data to ask for enormous ransom payments because they know that these organisations cannot afford any disruption.” For Saceanu, this shift to an environment driven by data and under near-constant threat has led organisations to experiment with advanced technologies such as AI in order to improve efficiency and spearhead innovation.


Proper ID Verification Requires Ethical Technology

When it comes to identity security, security teams should regularly monitor, identify, analyze, and report risks in their environment. If exploited, these risks can be detrimental to an organization, its assets, and stakeholders. They can also undercut ethical standards of privacy and data protection. Running risk assessments is especially important when there is a lack of visibility into company processes and security gaps. Organizations can systematically assess their security measures surrounding user identity data and ensure compliance with privacy policies and regulatory standards. ... Transparency is among the most vital aspects of ethical identity verification. It requires organizations to be upfront about how they collect and manage data, and how that data is used. This has to be reflected in the company's policies, culture, and, of course, its technology, including data storage and access. Users, i.e., the customers from whom data is collected, should be able to access the policy terms easily at any point. ... When companies are looking to procure ethical technology, it's important to account for factors like privacy, accessibility, security, and regulations. These factors reflect the perspective of the company using the technology and how it should be operated.
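
One hedged way to picture the risk-assessment step is a simple likelihood-times-impact register; the risks and scores below are purely hypothetical illustrations, not a prescribed methodology.

```python
# Hypothetical risk register for identity-data handling: each risk gets a
# likelihood and impact score (1-5); the product prioritizes remediation.
risks = [
    {"risk": "unencrypted ID documents at rest",   "likelihood": 3, "impact": 5},
    {"risk": "stale access grants to PII store",   "likelihood": 4, "impact": 4},
    {"risk": "no audit trail on verification API", "likelihood": 2, "impact": 4},
]

for r in sorted(risks, key=lambda x: x["likelihood"] * x["impact"], reverse=True):
    print(f"score {r['likelihood'] * r['impact']:2d}: {r['risk']}")
```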


Accelerating Business Growth Using AIOps and DevOps

The rapid evolution of AI brings forth several new potential opportunities and challenges. Today, AI drives the business growth of an enterprise in more ways than one. Artificial Intelligence for IT Operations, or AIOps, is a new concept that encompasses big data, data mining, machine learning (ML) and AI. AIOps is a practice that blends AI with IT operations to improve operational processes. AIOps platforms automate, optimize and improve IT operations, providing users with real-time visibility and predictive alerts that minimize operational issues and let teams resolve problems proactively, before they disrupt IT operations. ... Adopting AIOps helps DevOps through automation, predictive intelligence and better data-driven decisions. This collaboration fosters efficient processes, improved quality and continuous improvement to meet the ever-changing demands of the industry and customer requirements. ... AI makes it easier for DevOps teams to find patterns in data, derive meaning from that data and make informed decisions about how to allocate resources and processes. The convergence of AIOps and DevOps processes can yield valuable insights that can help improve decision-making.
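
As a toy stand-in for the predictive alerting described above, the sketch below flags metric samples that deviate sharply from recent history using a rolling z-score. Real AIOps platforms use far richer ML models; the latency series here is invented.

```python
from collections import deque
import statistics

def alert_stream(metrics, window=30, threshold=3.0):
    """Yield (index, value) for samples that deviate sharply from recent history."""
    history = deque(maxlen=window)
    for i, value in enumerate(metrics):
        if len(history) >= 2:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history)
            # Flag the sample if it sits more than `threshold` deviations out.
            if stdev > 0 and abs(value - mean) / stdev > threshold:
                yield i, value
        history.append(value)

# Hypothetical latency series (ms) with one spike an operator should see early.
latencies = [100, 102, 99, 101, 98, 103, 100, 450, 101, 99]
for i, v in alert_stream(latencies, window=5):
    print(f"sample {i}: latency {v} ms is anomalous")
```

Raising an alert the moment a metric leaves its normal band, rather than after a threshold breach causes an outage, is the essence of the proactive resolution the excerpt describes.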


When is data too clean to be useful for enterprise AI?

Not cleaning your data enough causes obvious problems, but context is key. Google suggests pizza recipes with glue because that’s how food photographers make images of melted mozzarella look enticing, and that should probably be sanitized out of a generic LLM. But that’s exactly the kind of data you want to include when training an AI to give photography tips. Conversely, some of the other inappropriate advice found in Google searches might have been avoided if the origin of content from obviously satirical sites had been retained in the training set. “Data quality is extremely important, but it leads to very sequential thinking that can lead you astray,” Carlsson says. “It can end up, at best, wasting a lot of time and effort. At worst, it can go in and remove signal from your data, and actually be at cross purposes with what you need.” ... AI needs data cleaning that’s more agile, collaborative, iterative and customized for how data is being used, adds Carlsson. “The great thing is we’re using data in lots of different ways we didn’t before,” he says. “But the challenge is now you need to think about cleanliness in every one of those different ways in which you use the data.” Sometimes that’ll mean doing more work on cleaning, and sometimes it’ll mean doing less.
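
A small sketch of Carlsson's point, with invented records: the same raw data is cleaned differently depending on the model it will train, rather than being sanitized once globally.

```python
# Hypothetical records scraped from the web: the same raw data may need
# different cleaning depending on the model being trained.
records = [
    {"text": "Add glue for a photogenic cheese pull",        "source": "food-photography blog"},
    {"text": "Bake at 450F for 12 minutes",                  "source": "recipe site"},
    {"text": "Congress passes law requiring pets to vote",   "source": "satire site"},
]

def clean_for(use_case, records):
    """Keep or drop records based on how the data will be used,
    rather than applying one global notion of 'clean'."""
    if use_case == "cooking_assistant":
        # Staging tricks and satire are noise for a recipe model.
        return [r for r in records
                if "photography" not in r["source"] and "satire" not in r["source"]]
    if use_case == "photography_tips":
        # The 'glue' trick is exactly the signal this model needs.
        return [r for r in records if "photography" in r["source"]]
    return records  # default: retain provenance and decide downstream

print([r["text"] for r in clean_for("cooking_assistant", records)])
```

Note that the `source` field is what makes per-use-case cleaning possible at all; scrubbing provenance away up front is exactly the "removing signal" failure mode described above.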


Architectural Intelligence – The Next AI

The vast majority of software has deterministic outcomes. If this, then that. This allows us to write unit tests and have functional requirements. If the software does something unexpected, we file a bug and rewrite the software until it does what we expect. However, we should consider AI to be non-deterministic. That doesn't mean random, but there is an amount of unpredictability built in, and that's by design. That the LLM predicts the most likely next word is a feature, not a bug. "Most likely" does not mean "always guaranteed". For those of us who are used to software being predictable, this can seem like a significant drawback. However, there are two things to consider. First, GenAI, while not 100% accurate, is usually good enough. ... When considering AI components in your system design, consider where you are okay with "good enough" answers. I realize we've spent decades building software that does what it's expected to do, so this may be a difficult idea to accept. As a thought exercise, replace a proposed AI component with a human. How would you design your system to handle incorrect human input? Anything from UI validation to requiring a second person's review. What if the User in User Interface is an AI?
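
One way to sketch the "replace the AI with a human" exercise in code: validate the model's output exactly as you would validate manual data entry, retrying or escalating on failure. Here `call_model` is a placeholder, not any particular vendor's API, and the JSON contract and bounds check are assumptions for illustration.

```python
import json

def call_model(prompt: str) -> str:
    """Placeholder for any LLM call -- assumed, not a specific vendor API."""
    raise NotImplementedError

def extract_order_total(document: str, max_attempts: int = 3) -> float:
    """Treat the model like an occasionally wrong human clerk: validate its
    answer as you would manual data entry, and retry or escalate rather than
    trusting the first response."""
    for _ in range(max_attempts):
        raw = call_model(f"Return JSON {{\"total\": <number>}} for: {document}")
        try:
            total = float(json.loads(raw)["total"])
        except (ValueError, KeyError, TypeError, json.JSONDecodeError):
            continue  # malformed answer: ask again, as you would re-prompt a person
        if 0 <= total < 1_000_000:  # domain sanity check, like UI validation
            return total
    raise RuntimeError("model output failed validation; route to human review")
```

The design choice is that non-determinism lives inside the loop while everything outside it remains deterministic and unit-testable, which is how "good enough" answers can sit safely inside conventional software.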


The Impact of Advanced Data Lineage on Governance

Advanced data lineage (ADL) provides a powerful set of tools for understanding data’s history. It is proactive and preventative, addressing data issues as they occur or before they happen. Advanced data lineage represents a significant evolution: historically, traditional data lineage tracked data movement and transformations linearly, so organizations often received static reports that quickly became outdated in fast-changing data environments. ... As ADL transforms how organizations understand and manage their data, it requires a corresponding evolution in data governance practices. This transformation requires more than selecting the right software; it calls for an adaptive framework that supports efficient assessment of, and action on, lineage information. An adaptive Data Governance framework is flexible enough to respond quickly to new insights provided by ADL while still maintaining a structured approach to data management. With this shift comes more frequent interaction between adaptive DG teams and other departments to resolve issues. To do this well, a framework should clearly define roles, responsibilities, and escalation paths for addressing issues identified by ADL. This approach is agile while maintaining a solid methodological foundation.
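
To make "lineage as a graph you can act on" concrete, here is a minimal, hypothetical sketch: each transformation is recorded as an edge, and a downstream walk answers the impact question that ADL tools automate at scale.

```python
from datetime import datetime, timezone

class LineageGraph:
    """Minimal sketch of lineage capture: record each transformation as an
    edge, then walk edges to answer 'what is affected if this source changes?'"""

    def __init__(self):
        self.edges = []  # (source, target, transformation, timestamp)

    def record(self, source, target, transformation):
        self.edges.append((source, target, transformation,
                           datetime.now(timezone.utc)))

    def downstream(self, dataset):
        """All datasets reachable from `dataset` -- the impact set for a change."""
        affected, frontier = set(), {dataset}
        while frontier:
            nxt = {t for s, t, _, _ in self.edges if s in frontier} - affected
            affected |= nxt
            frontier = nxt
        return affected

# Hypothetical pipeline: raw events feed a cleaned table, which feeds two consumers.
g = LineageGraph()
g.record("raw_events", "clean_events", "dedupe + timestamp fix")
g.record("clean_events", "daily_report", "aggregate by day")
g.record("clean_events", "churn_model_features", "feature extraction")
print(g.downstream("raw_events"))  # the full impact set of a raw_events change
```

An escalation path in the governance framework then maps each dataset in the impact set to an owning team, which is what turns lineage insight into action.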


Navigating AI Regulations: Key Insights and Impacts for Businesses

The historical risks associated with AI highlight the need for careful consideration and proactive management as these technologies continue to evolve. Addressing these challenges requires collaboration among technologists, policymakers, ethicists, and society at large to ensure that the development and deployment of AI contributes positively to society while minimizing potential harms. AI systems raise significant data privacy concerns because they collect and process vast amounts of personal data. Regulatory frameworks establish guidelines for data protection, ensuring individuals’ information is handled securely, responsibly, and with their full consent. AI systems must be understandable, fair, incorporate human judgment, and be ethical. Trustworthy AI systems should perform reliably across various conditions and be resilient to errors or attacks. Developers must comply with privacy laws and safeguard personal data used in training AI models. This includes obtaining user consent for data usage and implementing strong security measures to protect sensitive information.
 


Quote for the day:

"Small daily imporevement over time lead to stunning results." -- Robin Sherman
