Daily Tech Digest - January 02, 2025

7 Practices to Bolster Cloud Security and Keep Attackers at Bay

AI tools can facilitate quicker threat detection, investigation, and response. All healthy cloud security postures should utilize ML-based user and entity behavior analytics (UEBA) tools. Such tools effectively identify anomalous behavior across the network, while facilitating rapid investigation of potential threats and automating responses to mitigate and remediate attacks. Ideally, security professionals want to find vulnerabilities before an attack occurs, and such AI tools can help to do just that. ... When a threat occurs in the cloud, it can sometimes be difficult to assess the potential impact across a distributed or multitenant surface. By utilizing a centralized platform, security personnel have access to a response center that can automate workflows by orchestrating with different cloud applications, which in turn reduces the mean time to resolve (MTTR) incidents and threats. ... By correlating access and security logs from cloud applications, security personnel can identify attempts at data exfiltration from the cloud. As a quick example, if a SOC professional is investigating potential customer data exfiltration from a cloud-based CRM tool, he or she would want to correlate the logs of that CRM tool with the logs of other cloud applications, such as email or team communication tools. 
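The CRM-to-email log correlation described above can be sketched in a few lines. This is a minimal illustration, not a real SIEM query: the log fields (`user`, `action`, `time`) and the `bulk_export`/`send_external` event names are hypothetical stand-ins for whatever schema your log platform normalizes events into.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified log records. A real SIEM would join on many
# normalized fields (user, source IP, session ID) across applications.
crm_logs = [
    {"user": "alice", "action": "bulk_export", "time": datetime(2025, 1, 2, 9, 0)},
    {"user": "bob", "action": "view_record", "time": datetime(2025, 1, 2, 9, 5)},
]
email_logs = [
    {"user": "alice", "action": "send_external", "attachment_mb": 40,
     "time": datetime(2025, 1, 2, 9, 12)},
]

def correlate_exfiltration(crm, email, window=timedelta(minutes=30)):
    """Flag users who bulk-export CRM data and shortly afterwards send a
    large attachment externally -- a common exfiltration pattern."""
    flagged = []
    exports = [e for e in crm if e["action"] == "bulk_export"]
    for ex in exports:
        for msg in email:
            if (msg["user"] == ex["user"]
                    and msg["action"] == "send_external"
                    and timedelta(0) <= msg["time"] - ex["time"] <= window):
                flagged.append((ex["user"], msg["time"]))
    return flagged

print(correlate_exfiltration(crm_logs, email_logs))
```

The same join-on-user-within-a-time-window idea generalizes to team chat, file-sharing, and storage logs; the window size and event names are the tuning knobs.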


6 AI-Related Security Trends to Watch in 2025

As more organizations work to embed AI capabilities into their software, expect to see DevSecOps, DataOps, and ModelOps — or the practice of managing and monitoring AI models in production — converge into a broader, all-encompassing xOps management approach, Holt says. The push to AI-enabled software is increasingly blurring the lines between traditional declarative apps that follow predefined rules to achieve specific outcomes, and LLMs and GenAI apps that dynamically generate responses based on patterns learned from training data sets. ... The easy availability of a wide and rapidly growing range of GenAI tools has fueled unauthorized use of the technologies at many organizations and spawned a new set of challenges for already overburdened security teams. ... "If unchecked, this raises serious questions and concerns about data loss prevention as well as compliance concerns as new regulations like the EU AI Act start to take effect," she says.


Working in Cyber Threat Intelligence (CTI)

“The analysis of an adversary’s intent, opportunity, and capability to do harm is known as cyber threat intelligence.” It’s not just about finding some IOCs and sending them to the SOC. It’s about providing context about adversary activity so that other security teams can prioritize cyber defense efforts. While there are more steps than this, in short we collect intrusion data and analyze it, looking for correlations and trends in observed malicious activity. With that analysis in hand, we can provide actionable insights that keep defenders focused on only the most relevant threats. ... Aside from everything in the “What CTI Isn’t” section, the biggest challenge in CTI is that it’s next to impossible to get decent intel requirements. “Just get us intel” isn’t a thing. We need information from stakeholders in order to deliver relevant intelligence. What strategic initiatives, products, technologies, partnerships, etc. are of particular interest to the leadership? What are all of your countries of operation? What are considered the most critical assets? How would a threat actor achieving their objectives impede the organization’s mission? Unfortunately, this is an ongoing problem that many CTI analysts and CTI managers struggle with, and it often leads to intel analysts winging it.


What’s Ahead in Generative AI in 2025?

In the coming year, prompt engineering will continue its rapid maturation into a substantial body of proven practices for eliciting the correct output from LLMs and other foundation models. Within generative AI development tool sets, embedding libraries will become an essential component for developers to build increasingly sophisticated similarity searches that span a diverse range of data modalities. The recent TDWI survey on enterprise AI readiness shows that 28% of organizations already use or are deploying vector databases to store vector embeddings for use with AI models, while 32% plan to adopt those databases in the next few years. In addition, generative AI developers in 2025 will have access to a growing range of tools for no-code development of “agentic” applications that provide autonomous LLM-driven copilot, chatbot, and other functionality and that can be orchestrated over more complex process environments. ... Developers will have access in 2025 to a growing range of sophisticated models and data for building, training, and optimizing generative AI applications—including both commercial and open-source models. The recent TDWI survey on data and analytics trends showed that around 25% of enterprises are experimenting with private or public generative AI models, while 17% are building generative AI apps that use company data with pretrained models. 
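The embedding-based similarity search mentioned above can be sketched minimally. This uses toy 3-dimensional vectors and an in-memory dictionary as placeholders; a production system would obtain high-dimensional vectors from an embedding model and store them in a vector database.

```python
import math

# Toy "embeddings" for illustration only. Real embedding vectors come from
# a model and typically have hundreds or thousands of dimensions.
library = {
    "cloud security": [0.9, 0.1, 0.0],
    "pasta recipes": [0.0, 0.2, 0.9],
    "threat detection": [0.8, 0.3, 0.1],
}

def cosine(a, b):
    """Cosine similarity: dot product of a and b over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, k=2):
    """Rank stored documents by cosine similarity to the query vector."""
    ranked = sorted(library.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

print(top_k([0.9, 0.1, 0.0]))  # -> ['cloud security', 'threat detection']
```

The same ranking logic extends to any modality whose content can be embedded into a shared vector space, which is what makes multimodal similarity search possible.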


This Is The Phrase That Instantly Damages Your Leadership Integrity

Few phrases cause instant hesitation like “to be honest with you.” A few other honorable mentions cause the same damage for the same reasons: “In all honesty…,” “Frankly…,” “To tell you the truth…,” and “Truthfully…” or “Truthfully speaking…” When you casually use a statement like “to be honest with you” in an effort to make yourself more believable, the exact opposite happens: instead of trusting you more, listeners trust you less. ... Without leadership integrity, you’d have a very heavy lift trying to get people to believe in you, to listen to you, to count on you and to give you the benefit of the doubt that leaders so desperately need during times of uncertainty, ambiguity and crisis. This is why you don’t want to damage your leadership integrity or cause people to question your credibility by throwing out unthoughtful words or phrases that could give them pause. ... Instead of saying something like “mistakes were made,” which shows a complete lack of leadership integrity and signals that someone somewhere made a mistake without your taking any ownership of it, accept responsibility and show that you are accountable both for the mistake and for its resolution.


Generative AI is not going to build your engineering team for you

Generative AI is like a junior engineer in that you can’t just roll their code into production. You are responsible for it—legally, ethically, and practically. You still have to take the time to understand it, test it, instrument it, retrofit it stylistically and thematically to fit the rest of your code base, and ensure your teammates can understand and maintain it as well. The analogy is a decent one, actually, but only if your code is disposable and self-contained, i.e. not meant to be integrated into a larger body of work, or to survive and be read or modified by others. And hey—there are corners of the industry like this, where most of the code is write-only, throwaway code. ... To state the supremely obvious: giving code review feedback to a junior engineer is not like editing generated code. Your effort is worth more when it is invested into someone else’s apprenticeship. It’s an opportunity to pass on the lessons you’ve learned in your own career. Even just the act of framing your feedback to explain and convey your message forces you to think through the problem in a more rigorous way, and has a way of helping you understand the material more deeply. And adding a junior engineer to your team will immediately change team dynamics. It creates an environment where asking questions is normalized and encouraged, where teaching as well as learning is a constant.


Architectural Decision-Making: AI Tools as Consensus Builders

In an environment with lots of smart, quick-thinking people it can be a challenge to ensure everyone is heard, especially when the primary mode of interaction is videoconferencing. The online format (a Microsoft Teams group chat) gave people time to contribute their thoughts over a period of days rather than minutes. At various points in the online conversation, participants extracted content from the online discussion board and fed it to a large language model to compare ideas that were present in the dialogue, or to recast the dialogue in a particular person’s voice. ... The benefits of using AI tools are not cost-free. It’s important to verify the results of an AI’s synthesis of text because sometimes the AI misinterprets what was written. For example, during our discussion of capabilities and domains, an AI tool interpreted some of my text as stating that the boundaries of a domain are context dependent when in fact, I was making the opposite argument – that a domain must have a consistent definition that is valid across any contexts in which it participates. Another consideration is the ethics of intellectual property ownership and citation of participants’ contributions.


Perhaps the biggest challenge of IaC operations is drift — a scenario where runtime environments deviate from their IaC-defined states, creating a festering issue that could have serious long-term implications. These discrepancies undermine the consistency of cloud environments, leading to potential issues with infrastructure reliability and maintainability and even significant security and compliance risks. ... But having additional context for drift, as important as it may be, is only one piece of a much bigger puzzle. Managing large cloud fleets with codified resources introduces more than just drift challenges, especially at scale. Current-gen IaC management tools are effective at addressing resource management, but the demand for greater visibility and control in enterprise-scale environments is introducing new requirements and driving their inevitable evolution. ... The combination of IaC management and CAM empowers teams to manage complexity with clarity and control. As the end of the year approaches, it's 'prediction season' — so here’s mine. Having spent the better part of the last decade building and refining one of the more popular IaC management platforms, I see this as the natural progression of our industry: combining IaC management, automation, and governance with enhanced visibility into non-codified assets.
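The drift problem described above can be illustrated with a toy comparison of declared versus runtime state. The resource IDs and attributes below are made up for the sketch; real IaC tools compute this diff against live provider APIs rather than an in-memory dictionary.

```python
# IaC-declared desired state (what the code says should exist).
declared = {
    "s3/logs-bucket": {"versioning": True, "encryption": "AES256"},
    "vm/web-01": {"instance_type": "t3.small", "tags": {"env": "prod"}},
}

# What the cloud runtime actually reports.
runtime = {
    "s3/logs-bucket": {"versioning": False, "encryption": "AES256"},  # drifted
    "vm/web-01": {"instance_type": "t3.small", "tags": {"env": "prod"}},
    "vm/debug-99": {"instance_type": "t3.micro", "tags": {}},  # non-codified asset
}

def detect_drift(declared, runtime):
    """Report attributes that differ from the declared state, plus any
    runtime resources that no IaC definition accounts for."""
    report = {"drifted": {}, "unmanaged": []}
    for rid, want in declared.items():
        have = runtime.get(rid, {})
        changed = {k: (v, have.get(k)) for k, v in want.items()
                   if have.get(k) != v}
        if changed:
            report["drifted"][rid] = changed
    report["unmanaged"] = sorted(set(runtime) - set(declared))
    return report

print(detect_drift(declared, runtime))
```

Note that the same diff surfaces both problems the section raises: attribute drift on codified resources, and non-codified assets that IaC alone cannot see.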


4 keys for writing cross-platform apps

One big problem with cross-platform compiling is how asymmetrical it can be. If you’re a macOS user, it’s easy to set up and maintain Windows or Linux virtual machines on the Mac. If you use Linux or Windows, it’s harder to emulate macOS on those platforms. Not impossible, just more difficult—the biggest reason being the legal issues, as macOS’s EULA does not allow it to be used on non-Apple hardware. The easiest workaround is to simply buy a separate Macintosh system and use that. Another option is to use tools like osxcross to perform cross-compilation on a Linux, FreeBSD, or OpenBSD system. Another common option, one most in line with modern software delivery methods, is to use a system like GitHub Actions. The downside is paying for the use of the service, but if you’re already invested in such a platform, it’s often the most economical and least messy approach. Plus, it keeps the burden of system maintenance out of your hands. ... The way we write and deploy apps is always in flux. Who would have anticipated the container revolution, for instance? Or predicted the dominant language for machine learning and AI would be Python? To that end, it’s always worth keeping an eye on the future, since cross-platform deployment is fast becoming a must-have feature.
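The GitHub Actions route can be sketched as a minimal matrix workflow. This is a hedged example, not a drop-in config: the workflow name and the `make build` step are placeholders, and it assumes the project already has a portable build target that works on all three operating systems.

```yaml
# Hypothetical workflow: build the same project on hosted Linux, Windows,
# and macOS runners, with no local VMs or extra Apple hardware required.
name: cross-platform-build
on: [push]
jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - run: make build   # placeholder for your real build command
```

Each matrix entry runs the same job on a different hosted runner, which is what sidesteps the macOS EULA problem: the Apple hardware belongs to the service, not to you.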


The Connected Revolution: How Integrated Intelligence is Reshaping Drug Development

CI and end-to-end quality are dismantling traditional silos and fostering a seamless, data-driven ecosystem. The use of CI, potentially with data lakes as a way of consolidating vast amounts of data from disparate sources, removes the barriers between independent systems owned by separate departments. The movement of data — for example, clinical data needed in regulatory submissions, or safety data needed alongside regulatory data for regulatory reports — brings a level of fluidity to data management and helps companies optimize time and resources to generate product quality and safety insights. ... For clinical trials, CI and end-to-end quality can significantly enhance patient recruitment and retention. Advanced analytics can identify suitable candidates more efficiently, while real-time monitoring through connected devices can provide continuous data on patient responses and the identification of potential adverse events. This improves the quality of data collected, enhances patient safety and reduces trial time and cost. ... CI and AI-driven regulatory intelligence, in the context of quality-controlled procedures, can support the gathering of global submission requirements and the creation of global submission content, which will then be subject to human review as part of QC.



Quote for the day:

"A leader is best when people barely know he exists; when his work is done, his aim fulfilled, they will say: we did it ourselves." -- Lao Tzu
