Daily Tech Digest - December 29, 2023

5 Ways That AI Is Set To Transform Cybersecurity

Cybersecurity has long been notoriously siloed, with organizations installing many different tools and products, often poorly interconnected. No matter how hard vendors and organizations work to integrate tools, coalescing all relevant cybersecurity information into one place remains a big challenge. But AI offers a way to combine multiple data sets from many disparate sources and provide a truly unified view of an organization’s security posture, with actionable insights. And with generative AI, gaining those insights is as simple as asking the system questions such as “What are the top three things I could do today to reduce risk?” or “What would be the best way to respond to this incident report?” AI has the potential to consolidate security feeds in a way the industry has never been able to quite figure out. Generative AI will blow up the very nature of data infrastructure. Think about it: All the different tools that organizations use to store and manage data are built for humans. Essentially, they’re designed to segment information and put it in various electronic boxes for people to retrieve later. It’s a model based on how the human mind works.


Microservices Resilient Testing Framework

Resilience in microservices refers to the system's ability to handle and recover from failures, continue operating under adverse conditions, and maintain functionality despite challenges like network latency, high traffic, or the failure of individual service components. Microservices architectures are distributed by nature, often involving multiple, loosely coupled services that communicate over a network. This distribution often increases the system's exposure to potential points of failure, making resilience a critical factor. A resilient microservices system can gracefully handle partial failures, prevent them from cascading through the system, and ensure overall system stability and reliability. For resilience, it is important to think in terms of positive and negative testing scenarios. The right combination of positive and negative testing plays a crucial role in achieving this resilience, allowing teams to anticipate and prepare for a range of scenarios and maintaining a robust, stable, and trustworthy system. For this reason, the rest of the article will be focusing on negative and positive scenarios for all our testing activities.
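
As a minimal illustration of a negative scenario, the Python sketch below (the endpoint and function names are hypothetical) verifies that a caller degrades gracefully to a fallback, rather than cascading the failure, when a downstream dependency is unreachable.

    # Illustrative sketch: a negative-scenario resilience test for a microservice call.
    # The endpoint and names are placeholders; the pattern is what matters.
    import requests

    FALLBACK_PROFILE = {"name": "unknown", "source": "fallback"}

    def fetch_profile(user_id: str, base_url: str, timeout: float = 0.5) -> dict:
        """Call a downstream service, falling back to a safe default on failure."""
        try:
            resp = requests.get(f"{base_url}/profiles/{user_id}", timeout=timeout)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            # Negative path: timeouts, 5xx errors, and connection failures all
            # degrade to a default instead of propagating the failure to callers.
            return FALLBACK_PROFILE

    def test_fetch_profile_degrades_when_dependency_is_down():
        # Point the client at a port where nothing is listening to simulate an outage.
        assert fetch_profile("42", "http://127.0.0.1:9") == FALLBACK_PROFILE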


Skynet Ahoy? What to Expect for Next-Gen AI Security Risks

From a cyberattack perspective, threat actors already have found myriad ways to weaponize ChatGPT and other AI systems. One way has been to use the models to create sophisticated business email compromise (BEC) and other phishing attacks, which require the creation of socially engineered, personalized messages designed for success. "With malware, ChatGPT enables cybercriminals to make infinite code variations to stay one step ahead of the malware detection engines," Harr says. AI hallucinations also pose a significant security threat and allow malicious actors to arm LLM-based technology like ChatGPT in a unique way. An AI hallucination is a plausible response by the AI that's insufficient, biased, or flat-out not true. "Fictional or other unwanted responses can steer organizations into faulty decision-making, processes, and misleading communications," warns Avivah Litan, a Gartner vice president. Threat actors also can use these hallucinations to poison LLMs and "generate specific misinformation in response to a question," observes Michael Rinehart, vice president of AI at data security provider Securiti. 


Cybersecurity teams need new skills even as they struggle to manage legacy systems

To stay ahead, though, security leaders should incorporate prompt engineering training for their team, so they can better understand how generative AI prompts function, the analyst said. She also underscored the need for penetration testers and red teams to include prompt-driven engagements in their assessment of solutions powered by generative AI and large language models. They need to develop offensive AI security skills to ensure models are not tainted or stolen by cybercriminals seeking intellectual property. They also have to ensure sensitive data used to train these models are not exposed or leaked, she said. In addition to the ability to write more convincing phishing emails, generative AI tools can be manipulated to write malware despite limitations put in place to prevent this, noted Jeremy Pizzala, EY's Asia-Pacific cybersecurity consulting leader. He noted that researchers, including himself, have been able to circumvent ethical restrictions that guide platforms such as ChatGPT and prompt them to write malware.


The relationship between cloud FinOps and security

Established FinOps and cybersecurity teams should annually evaluate their working relationship as part of continuous improvement. This collaboration helps ensure that, as practices and tools evolve, the correct FinOps data is available to cybersecurity teams as part of their monitoring, incident response and post-incident forensics. The FinOps Foundation doesn't mention cybersecurity in its FinOps Maturity Model. But, by all rights, FinOps and cybersecurity collaboration indicates a maturing organization in the model's Run phase. Ideally, moves to establish such collaboration should show themselves in the Walk stage. ... Building a relationship between the FinOps and cybersecurity teams should start early when an organization chooses a FinOps tool. A FinOps team can better forecast expenses, plan budget allocation and avoid unnecessary costs by understanding security requirements and constraints. These forecasts result in a more cost-effective and financially efficient cloud operation, so plan for some level of cross-training between the teams.


What is GRC? The rising importance of governance, risk, and compliance

Like other parts of enterprise operations, GRC comprises a mix of people, process, and technology. To implement an effective GRC program, enterprise leaders must first understand their business, its mission, and its objectives, according to Ameet Jugnauth, the ISACA London Chapter board vice president and a member of the ISACA Emerging Trends Working Group. Executives then must identify the legal and regulatory requirements the organization must meet and establish the organization’s risk profile based on the environment in which it operates, he says. “Understand the business, your business environment (internal and external), your risk appetite, and what the government wants you to achieve. That all sets your GRC,” he adds. The roles that lead these activities vary from one organization to the next. Midsize to large organizations typically have C-level executives — namely a chief governance officer, chief risk officer, and chief compliance officer — to oversee these tasks, McKee says. These executives lead risk or compliance departments with dedicated teams.


Revolutionising Fraud Detection: The Role of AI in Safeguarding Financial Systems

Conventional fraud detection methods, primarily rule-based systems and human analysis, have proven increasingly inadequate in the face of evolving fraud tactics. Rule-based systems, while effective in identifying simple patterns, often struggle to adapt to the ever-changing landscape of fraud. Fraudsters have stronger motivation, and they evolve faster than the rules in the rules engine. ... The same volumes of data that are overwhelming for traditional fraud detection systems are fuel for AI. With its ability to learn from vast amounts of data and identify complex patterns, AI is poised to revolutionize the fight against fraud. ... While AI offers immense potential, it’s crucial to acknowledge the challenges associated with its adoption. Data privacy concerns, ethical considerations around algorithmic bias, and the need for robust security measures are all critical aspects that demand careful attention. As AI opens new frontiers in fraud prevention, unregulated AI technology such as deepfakes in the wrong hands could also enable sophisticated impersonation scams. However, the benefits of AI far outweigh the challenges.
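
As a toy illustration of learned patterns versus fixed rules, the hedged sketch below trains scikit-learn's IsolationForest on synthetic transaction features and scores an obvious outlier; the features and values are made up, and this is not a production fraud model.

    # Anomaly scoring on synthetic transaction data with an Isolation Forest.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Columns: amount, seconds since last transaction, distance from home (km).
    normal = rng.normal(loc=[50, 3600, 5], scale=[20, 600, 3], size=(1000, 3))
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    suspicious = np.array([[4900, 12, 800]])  # large amount, rapid-fire, far away
    print(model.predict(suspicious))          # -1 flags an anomaly, 1 means normal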


API security in 2024: Predictions and trends

The rapid rate of change of APIs means organizations will always have vulnerabilities that need to be remediated. As a result, 2024 will usher in a new era where visibility will be a priority for API security strategies. Preventing attackers from entering the perimeter is not a 100% foolproof strategy, whereas having real-time visibility into a security environment will enable rapid responses from security teams that neutralize threats before they impact operations or extract valuable data. ... With the widespread use of APIs, especially in sectors such as financial services, regulators are looking to encourage transparency in APIs. This means data privacy concerns and regulations will continue to impact API use in 2024. In response, organizations are becoming wary of having third parties hold and access their data to conduct security analyses. We expect to see a shift in 2024 where organizations will demand running security solutions locally within their own environments. Self-managed solutions (either on-premises or private cloud) eliminate the need to filter, redact, and anonymize data before it’s stored.


The Terrapin Attack: A New Threat to SSH Integrity

Microsoft’s logic is that the impact on Win32-OpenSSH is limited. This is a major mistake. Microsoft’s decision allows unknown server-side implementation bugs to remain exploitable in a Terrapin-like attack, even if the server got patched to support “strict kex.” As one Windows user noted, “This puts Microsoft customers at risk of avoidable Terrapin-style attacks targeting implementation flaws of the server.” Exactly so. You see, for this protection to be effective, both client and server must be patched. If one or the other is vulnerable, the entire connection can still be attacked, so to be safe, you must patch and update both your client and server SSH software. And if you’re on Windows and you haven’t manually updated your workstations, their connections are open to attack. While patches and updates are being released, the widespread nature of this vulnerability means that it will take time for all clients and servers to be updated. Because you must already have an MITM attacker in place to be vulnerable, I wouldn’t go spend the holiday season worrying myself sick. I mean, you’re sure you don’t already have a hacker inside your system, right? Right!?
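
As a hedged illustration of what "both sides must be patched" means in practice, the sketch below opens a raw socket to an SSH server and checks whether its key-exchange offer advertises the strict-KEX marker (kex-strict-s-v00@openssh.com) that patched servers use to signal the countermeasure. It only inspects the server's offer, says nothing about your clients, and is not a substitute for a real scanner.

    # Minimal check for the server-side "strict KEX" marker in the SSH KEXINIT offer.
    import socket

    STRICT_KEX_SERVER = b"kex-strict-s-v00@openssh.com"

    def server_offers_strict_kex(host: str, port: int = 22, timeout: float = 5.0) -> bool:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            data = sock.recv(4096)                         # server identification banner
            sock.sendall(b"SSH-2.0-strict_kex_check\r\n")  # our identification string
            try:
                # Keep reading until the server's KEXINIT (which carries the
                # comma-separated kex algorithm name-list) arrives or we time out.
                while STRICT_KEX_SERVER not in data and len(data) < 65536:
                    chunk = sock.recv(4096)
                    if not chunk:
                        break
                    data += chunk
            except socket.timeout:
                pass
            return STRICT_KEX_SERVER in data

    if __name__ == "__main__":
        print(server_offers_strict_kex("127.0.0.1"))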


Supporting Privacy, Security and Digital Trust Through Effective Enterprise Data Management Programs

Those professionals responsible for supporting privacy efforts should therefore prioritize effective enterprise data management because it is integral to safeguarding individuals’ privacy. A well-structured data management framework works to ensure that personal information is handled ethically and in compliance with regulations, while fostering a culture of responsible data stewardship within organizations. When done right, this reinforces trust with stakeholders, serves as a differentiator in the marketplace, improves visibility into data ecosystems, expands reliability of data, and optimizes scalability and innovative go-to-market efforts. ... Most, if not all, of the global data privacy laws and regulations require data to be managed effectively. To comply with these laws and regulations, organizations must first understand the data they collect, the purposes for its collection, how it is used, how it is shared, how it is stored, how it is destroyed, and so on. Only after organizations have a full understanding of their data ecosystem can they begin to implement effective controls to both protect data and preserve the ability of the data to achieve intended operational goals.



Quote for the day:

"Too many of us are not living our dreams because we are living our fears." -- Les Brown

Daily Tech Digest - December 28, 2023

CISO: Top 10 Trends for 2024

Mike highlighted recent legal cases involving CISOs, expressing concern about the unprecedented accountability of security professionals and the potential for them to be scapegoated. He discussed cases like Joe Sullivan at Uber and Tim Brown at SolarWinds, emphasizing the SEC's issuance of a Wells Notice for a CISO, a first in history. Mike questioned the trend of holding CISOs responsible for issues beyond their control and predicted a continued exodus of CISOs from their roles due to perceived lack of support. Yogesh offered a contrasting view, suggesting that recent cases may serve as catalysts for elevating the role of CISOs and improving security programs. ... Nitin addressed the widespread reliance on third parties in today's technological landscape and the need for continuous due diligence beyond initial assessments. Nitin emphasized the importance of close coordination and regular conversations with key third-party providers, highlighting the significance of vendor management skills and understanding the scope of responsibilities. Yogesh brought up the concept of shared responsibility models inspired by the practices of AWS and Amazon, emphasizing the need for a prioritized and evolving approach to third-party risk management.


Why People Should Be at the Heart of Operational Resilience

Embracing the ethos of “you build it, you run it” isn’t necessarily a bad thing, but turning it into a fetish can easily lead us into a place where failures and faults become the responsibility of individuals. That’s not good for anyone, humans or technology. “If the resilience of a system depends on humans never making mistakes, then the system is really brittle,” Shortridge said. “Humanity’s success is because of our creativity and ability to adapt; it isn’t because we’re great at doing the same thing the same way every time, or can memorize 50 things on a checklist that we never forget.” Although DevOps is well-intentioned in attempting to break down barriers, it has arguably contributed to a broader organizational discomfort with failure — a desire to control and minimize risk. “Many organizations struggle with the existential angst of wanting to prevent anything bad from ever happening,” Shortridge claimed. This, she added, is ultimately “an impossible goal … It’s a downward spiral where the fear of things going wrong results in a slower, heavier approach, which actually increases the likelihood of things going wrong – as well as hindering the ability to swiftly recover from failure.”
Managing vendor partners is not a “one-and-done” activity. That’s why technology is so crucial to keep this process from being a herculean effort and make continuous monitoring more realistic throughout every stage of the vendor lifecycle. As an example, consider the initial assessment stage, when companies invite vendors to bid or pitch their services. Security questionnaires should be required at this point, especially for prospects that would be gaining full access to systems. These questionnaires can be automated to start, while still allowing respondents to supplement responses or resources. It's also a good idea to require a security audit report to illuminate any gaps that would need to be addressed before a contract gets signed. Regardless of the size or influence of vendor prospects, companies should always do their due diligence when it comes to assessing risks to avoid easily preventable attacks. Companies should provide a contract to approved vendors that clearly outlines compliance expectations — including a timeline of how long they have to fix any issues identified in the earlier security audit. 


Getting the most from cloud parking

Cloud parking, a component of FinOps, is the practice of shutting down cloud resources when your business is not using them. For example, if you have a cloud server instance running on a service like EC2, turning the server off when it's not hosting an active workload is an example of cloud parking. Later, if you want to use the server again, you'd "unpark" it by starting the instance back up. Cloud parking is important because almost all cloud services charge, at least in part, based on total running time. By parking cloud resources that you're not actively using, you stop the pricing meter and avoid paying for resources you don't actually need. ... Most types of cloud data resources, such as databases or storage volumes, can't be shut off in the same way that compute resources can, so businesses end up paying for their data even after applications that interact with the data are no longer running. With a sophisticated toolset that allows you to convert between data storage types quickly, it's possible to minimize this cost. For instance, imagine you shut down an EC2 instance and want to stop paying for the EBS volume that the instance uses.
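
Continuing that example, a minimal boto3 sketch of parking and unparking might look like the following; it assumes AWS credentials are configured, the IDs are placeholders, and the volume has already been detached. Snapshotting and then deleting a detached EBS volume is one way to stop paying for provisioned storage while keeping the data recoverable.

    # "Parking" idle EC2 resources with boto3 to stop the billing meter.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def park_instance(instance_id: str) -> None:
        """Stop an idle instance so compute charges stop accruing."""
        ec2.stop_instances(InstanceIds=[instance_id])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    def archive_volume(volume_id: str) -> str:
        """Snapshot a detached EBS volume, then delete it to stop paying for provisioned storage."""
        snapshot = ec2.create_snapshot(VolumeId=volume_id, Description="parked volume archive")
        ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
        ec2.delete_volume(VolumeId=volume_id)
        return snapshot["SnapshotId"]

    def unpark_instance(instance_id: str) -> None:
        """Start the instance again when the workload resumes."""
        ec2.start_instances(InstanceIds=[instance_id])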


The secret to making data analytics as transformative as generative AI

Unstructured and ungoverned data lakes, often built around the Hadoop ecosystem, have become the alternative to traditional data warehouses. They’re flexible and can store large amounts of semi-structured and unstructured data, but they require an extraordinary amount of preparation before the model ever runs. ... “The power of GPUs allows them to analyze as much data as they want,” Leff says. “I feel like we’re so conditioned — we know our system cannot handle unlimited data. I can’t just take a billion rows if I want and look at a thousand columns. I know I have to limit it. I have to sample it and summarize it. I have to do all sorts of things to get it to a size that’s workable. You completely unlock that because of GPUs.” RAPIDS, Nvidia’s open-source suite of GPU-accelerated data science and AI libraries, also accelerates performance by orders of magnitude at scale across data pipelines by taking the massive parallelism that’s now possible and allowing organizations to apply it toward accelerating the Python and SQL data science ecosystems, adding enormous power underneath familiar interfaces.
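
As a small illustration, assuming a CUDA-capable GPU and the RAPIDS cuDF library are installed, the familiar pandas-style operations below run on the GPU; the file path and column names are placeholders.

    # GPU-accelerated dataframe work with cuDF, mirroring the pandas API.
    import cudf

    df = cudf.read_csv("transactions.csv")  # placeholder dataset
    summary = (
        df.groupby("merchant_category")["amount"]
          .agg(["count", "mean", "sum"])
          .sort_values("sum", ascending=False)
    )
    print(summary.head(10))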


3 Strategies For Turning Uncertainty Into a Clear Path Forward

To stand up to uncertainty, you must start reframing it as an opportunity. For leaders, rapidly multiplying unknowns increases the pressure to rebuild and reimagine their businesses. Though the journey from uncertainty to clarity is formidable, addressing challenges with a lens of opportunity leads to more and better innovation. ... A focus on simplicity can help. Simplicity is about focusing on the right things rather than doing things right. It's about focusing on the fundamentals, such as customer needs, and simple but powerful questions, such as "what do they need?" that help you get to the core of a problem and ensure you're solving the right one. "Keep it simple" means focusing on the strongest growth opportunities and having the courage to get rid of efforts that don't move the needle. ... For many who have "grown up" in large, resource-rich corporate environments, there is an instinct to default to resources (e.g. budget, headcount) to solve problems. However, my research over 15 years has shown that constraints can help navigate uncertainty. How? By activating creativity and ingenuity and relying on existing resources rather than waiting for additional resources to get started.


AI Investments We Shouldn't Overlook

AI is not a product -- it’s an ever-growing cycle of data usage, and people can be a huge factor in its failure. This leads us back to trust, as most people don’t trust the technology or the leaders working to regulate it. According to Pew Research, 52% of Americans say they feel more concerned than excited about the increased use of artificial intelligence. Those concerns are particularly strong in communities historically underrepresented in the design and deployment of technology. Meaningful participation, including communities of users, those impacted, and creators, will improve ethical inquiry, help reduce harmful biases, and build confidence in AI’s fairness. To help allay these concerns, we need to have “seats at the table” for people with broader domain expertise. This is especially vital in areas such as health, finance, and law enforcement where bias has existed historically and is still a serious concern. Additionally, we should consider funding the National Science Foundation’s National AI Research Resource Task Force, and similar efforts, to reduce the economic barriers of entry into AI professions.


How to turn shadow IT into a culture of grassroots innovation

Balancing innovation with IT control remains necessary. Cybersecurity, including privacy and data protection, is considered the top business risk by corporate leaders. Your organization’s risk tolerance will depend on its culture, customers, and industry. Many aspects of security will be non-negotiable, but many can be solved by listening to users and evolving how you use platforms and services. One of the main risks associated with shadow IT is being blind to where company data lives. Without control, you can’t apply consistent policies. Let teams know why security processes are necessary and which standards any platform or tool must meet. Work to understand the business purpose of the adoption so you can help them find an alternative if their initial choice doesn’t meet those standards. The goal is to help users make intelligent security decisions – or help them behave securely by default – while enabling them to take advantage of technology that enhances their work. For example, by adopting a single sign-on solution with multi-factor authentication, you can solve access issues and give people a wider choice of apps and services while maintaining centralized visibility.


Unstructured Data Management Predictions for 2024

Data is increasingly in motion as IT needs to leverage new storage technologies and satisfy new business requirements. Enterprise data migrations of unstructured file and object data have long been complex and overly manual, often requiring professional services. Automation and AI tools will change this, enabling intelligent, efficient data migrations that no longer need IT managers to babysit them and that are also adaptive. Modern tools will know how to solve problems on the fly and self-remediate and will be able to recommend optimal storage tiers for different unstructured data workloads and use cases. This is a timely development, as data migrations are becoming more varied all the time and dependent upon the customer's changing environment — from firewall to network connections to security configurations. ... Unstructured data management will deliver affordable resiliency at a fraction of the cost, by creating cheap copies in durable object storage in the cloud for non-critical data — which is the bulk of all data in storage. This "poor man's data resiliency" approach will complement the 3x backup method for mission-critical data to create a cost-effective and holistic disaster recovery strategy.
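
A minimal sketch of that "cheap copy" idea, assuming AWS credentials and placeholder bucket and file names, is to push non-critical files into an archival object-storage class with boto3; restores from such tiers are slow, which is the trade-off for the low cost.

    # Cheap, durable copies of non-critical data via an archival S3 storage class.
    import boto3

    s3 = boto3.client("s3")

    def archive_copy(local_path: str, bucket: str, key: str) -> None:
        s3.upload_file(
            local_path,
            bucket,
            key,
            ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},  # cheapest, slow-to-restore tier
        )

    archive_copy("/data/projects/old_renders.tar", "example-archive-bucket", "renders/old_renders.tar")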


ChatGPT can cough up sensitive information, raises privacy concerns

While catastrophic forgetting is supposed to bury old information as new data is added, researchers from Indiana University (IU) Bloomington have found that memories of these large language models (LLMs) can be jogged, posing privacy risks. According to a New York Times report, graphics editor Jeremy White was informed that his email address was procured via ChatGPT by an IU Ph.D. candidate, Rui Zhu. Zhu and his team were able to obtain White's email address, along with those of over 30 NYT employees, from GPT-3.5 Turbo, an LLM from OpenAI. ... Speaking to the Daily Mail, AI expert Mike Wooldridge warned that confiding in ChatGPT about personal matters or opinions, such as work grievances or political preferences, could have consequences, reported The Guardian. Sharing private information with the chatbot may be "extremely unwise" as the revealed data contributes to training future versions. Wooldridge emphasizes that users should not expect a balanced response, as the technology tends to "tell you what you want to hear." He also dismissed the idea that AI possesses empathy or sympathy and cautions users that anything shared with ChatGPT may be used in future versions, making retractions nearly impossible.



Quote for the day:

"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous

Daily Tech Digest - December 27, 2023

Artificial ethics: Programmed principles or cultivated conviction?

Are AI systems developing generalisable ethical principles? Evidence suggests limited abilities to contextually apply concepts like privacy rights and informed consent. Or is ethical behavior just pattern recognition of scenarios labeled “unacceptable” by training data? There is a risk of overreliance on surface-level input/output mapping without philosophical grounding. Compare this rules-based approach to the human internalisation of ethical frameworks tied to justice, rights, duties, and harms. ... Their reasoning happens within limited data slices. This opacity around applied judgment represents a major trust gap. We cannot investigate when AI should make independent decisions in ethically ambiguous areas versus defer to human oversight due to understandable limitations in their moral literacy. Bridging this chasm requires architecting comprehensive ethical infrastructure across data sourcing, model design, and product applications. Ethics must permeate the entirety of systems, not follow as an afterthought. Careful scrutiny into reasoning behind AI choices can uncover areas for instilling principled priorities over transitory rules.


‘Merchants of Complexity’: Why 37Signals Abandoned the Cloud

With this ease of cloud computing comes a certain loss of independence. When a cloud provider suffers a massive outage, the customers are helpless to do anything for their own users. Hightower and DHH recalled a series of outages on the Google Cloud Platform that was so bad, it spurred 37Signals to move everything over to AWS. “The sense of desperation you feel when everything is out of your control, and there’s literally nothing we can do in the moment to fix it, is just so disheartening,” DHH said. And moving a workload, and its associated data, from one cloud to another is far from a trivial or inexpensive task. DHH noted that it cost 37signals “hundreds of thousands of dollars” to move 6 to 7 petabytes of data from GCP, due to egress costs. “This whole idea that the cloud is going to give you mobility was not really true,” DHH said. ... DHH related how you can see $600,000 of Dell servers out there on a loading dock somewhere. Whereas with the cloud, you are never sure where the money goes. You can click a button to spin up an authorization service, forget about it and let it run up thousands of dollars in monthly charges on the corporate account.


If you don’t already have a generative AI security policy, there’s no time to lose

Over time, security teams have tried to rein in shadow IT with policies that mitigate the plethora of risks and challenges it has introduced, but many remain due to its scale. Figures from research firm Gartner revealed that 41% of employees acquired, modified, or created technology outside of IT's visibility in 2022, while a 2023 shadow IT and project management survey from Capterra found that 57% of small and midsized businesses have had high-impact shadow IT efforts occurring outside the purview of their IT departments. Although generative AI is quite a different thing, it's taking off far quicker than shadow IT did. The lesson is that security-focused policies should be put in place in the early stages as new technology use grows and not after it reaches an unmanageable scale. Adding to the pressures are the potential security risks generative AI can insert into businesses if unmanaged, which are very much still being understood. ... The problem is that most organizations, regardless of size or industry, are experiencing the same challenge around how to control and manage the secure use of generative AI, Thacker says.


The Silver Bullet Myth: Debunking One-Size-Fits-All Solutions in Data Governance

Customized Data Governance frameworks streamline Data Management processes, allowing them to better align with specific organizational workflows. This alignment drives an increase in the overall efficiency of operations and reduces redundancies, saving both time and resources. The result – minimizing errors, making the ship run more smoothly, and cost savings – is a complete win-win scenario. Effective Data Governance is also an instrumental factor for managing risks such as breaches and misuse. Customized frameworks provide organizations with enough space to put together robust mechanisms for identifying, assessing, mitigating, and ultimately dealing with risks in a way that is tailored to the specific risk landscape in question. Another thing the proponents of the silver bullet approach disregard is the need for solutions for protecting rapidly moving data, as with same day ACH transfers, messaging apps, and real-time video call apps such as Zoom and Google Meet. As organizations evolve, so, too, do their Data Governance needs. Customized frameworks are scalable and adaptable, accommodating changes as the organization grows, enters new markets, or adopts new technologies.  


CIOs Battle Growing IT Costs with Tools, Leadership

CIOs can optimize IT spend by implementing more rigorous, strategy-aligned software approval processes aimed at avoiding duplicative spend and ensuring contracts are rightsized for the business needs. “The challenge and responsibility of CIOs is to be intentional with every dollar and investment by keeping the organization focused on the most important priorities instead of pursuing every exciting new idea,” she says. Mandell says IT leaders should encourage a culture of innovation and ideation, but they must also balance maintaining a strategic focus -- and communicate these goals across their own team and other areas of the organization. ... “Bridging the finance and engineering functions is hard work and you need both a team and a platform to effectively accomplish this,” she says. “When you do, you will be able to ensure and show that your cloud costs are being effectively managed.” ... When integrated effectively, AI-powered solutions can enhance decision-making and identify opportunities for optimization. “With the help of emerging technologies, CIOs and other budget decision makers will have greater visibility into spend, helping ensure resources are allocated strategically and IT environments are streamlined,” Mandell explains.


CIOs in financial services embrace gen AI — but with caution

AI is not the future of financial services — it’s the present. Genpact, a major business and technology services company that assists banks such as JP Morgan and Goldman Sachs, is already utilizing AI. “It’s really good at summarising, filling in blanks, and connecting dots, so generative AI is fit for purpose,” says Brian Baral, global head of risk at Genpact. “We’ve been able to leapfrog and do in months what had taken three years, but the data is key. Banks have to get ready to take the step forward.” Conscious of the recent history of disruption to financial services, the sector’s technology leaders are already looking for opportunities in AI. “Generative AI is starting off a new age of exploration in IT,” says Frank Schmidt, CTO at insurance firm Gen Re. Cugini at KeyBank agrees, and adds that the exploration has to include a cross-functional team from all areas of the business, not just IT. “We also pulled in some experts from Microsoft and Google to really understand what AI means to our sector.” Schmidt sees AI as having potential in process automation, particularly underwriting submissions. “AI will play a role in this workflow and classifying information,” he says.


NASA Releases First Space Cybersecurity Best Practices Guide

The guidance urges public and private sector organizations conducting space activities to establish a continuous process of mission security risk analysis and risk response in order to routinely identify and address security risks related to specific operations. NASA also advises organizations to apply the principles of domain separation and least privilege designs across their enterprises to better mitigate supply chain attacks and other operational vulnerabilities. Misty Finical, deputy principal adviser for enterprise protection at NASA, said the guidance "represents a collective effort to establish a set of principles that will enable us to identify and mitigate risks and ensure continued success of our missions, both in Earth's orbit and beyond." Reports detail a variety of challenges that organizations have faced in recent years while responding to emerging cybersecurity threats in space. A 2019 Government Accountability Office assessment found that the Department of Defense had struggled to adopt new approaches to protect U.S. satellites from cyberattacks by foreign adversaries and from the increasing threat of space debris.


How to incorporate human-centric security

The concept of human-centric security focuses on better management of the insiders that either inadvertently or maliciously cause so many of the threats that companies must deal with. Gartner recommends reducing friction caused by security strategies and starting to manage security risk. A human-centric approach to security not only takes the burden of security off the employee, it starts to look at the overall risk associated with certain behaviors and on improving the experience of employees. One way to look at this is as a trade-off. Allowing people to work remotely, for example, carries a certain security risk that needs to be weighed against the benefits of giving employees flexibility. However, another important way to look at risk is to analyze the behaviors that are most likely to lead to future threats and determine new ways to mitigate those risks to reduce future threats. By using insider risk management software, companies can better understand new work patterns of remote employees, track negative sentiment and flag access to sensitive data to proactively improve the company’s overall cybersecurity and employee experience.


AI: A Data Privacy Ally?

We can expect to see new technologies created to address the security and data privacy concerns in an AI world. Imagine consumers getting their own “AI Consent Assistant.” Such a tool would move us from static, one-time consent checkboxes to dynamic, ongoing conversations between consumers and platforms, with the AI Consent Assistant acting as a personal guardian to negotiate on our behalf. Or maybe AI tools could be developed to help security teams predict privacy breaches before they happen or proactively auto-redact sensitive information in real-time. We must think differently about AI in relation to data privacy – the future of data is not about how much we collect, but how ethically it is used and how we can realistically safeguard it so that we get the best out of AI without violating data privacy tenets. ... Transparency should never be a question – no one should have to guess at what data is collected, why, how it is stored, or how to remove it. Before launching any new technology or platform, companies should assess the privacy impact, working to identify potential privacy issues and taking preventive measures from the start, as it remains quite difficult to retrofit privacy.
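
As a deliberately simple illustration of the auto-redaction idea, the sketch below masks a few common patterns with regular expressions; a real deployment would lean on a dedicated PII-detection service and far more robust patterns than these.

    # Toy redaction pass over outbound text.
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(text: str) -> str:
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label}]", text)
        return text

    print(redact("Reach me at jane.doe@example.com, SSN 123-45-6789."))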


Security And Market Adoption Of Open Banking

With regard to the first element that ensures security, the European Banking Authority drafted regulatory technical standards for strong customer authentication in 2016. As specified by PSD2, strong authentication must rely on at least two key elements that are independent of one another. This is to ensure the disclosure or theft of one authentication element does not affect the overall security. ... As for the second element of security mitigation, the communication channel between third-party providers and banks, PSD2 paved the way for regulated application programming interfaces. The interface must allow third-party providers to identify themselves with banks when requesting access to accounts. This outcome establishes requirements and responsibilities that prevent third-party providers from using expired certificates, or not having them at all, when fetching data or transmitting a payment order. ... Building trust in open banking is an essential step toward achieving widespread adoption as well. Companies can share real-life examples, such as case studies and testimonials. These are powerful ways to showcase the benefits of open banking and building trust with customers. 



Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki

Daily Tech Digest - December 26, 2023

Generative AI is forcing enterprises and policymakers to rewrite the rules of cybersecurity

In a sense, creativity is the new hacker’s currency; it’s used to craft and execute attacks that traditional cybersecurity measures fail to detect and prevent. With 72 percent of white hat hackers believing they’re more creative than AI, it’s safe to assume that bad actors with similar skill sets only need a few creative muscles to cause material problems at scale. From persistent nagging to creative wordplay, hackers can trick an AI model to perform unintended functions and reveal information otherwise meant to be guarded. These prompts don’t need to be complex, and bad actors are constantly exploring new methods to get generative AI models to spill their secrets. The threat landscape for companies innovating with AI just got a lot more complex. So what should we do about it? Just like there are various ways to express a message in English, the same goes for LLM hacks. There are countless different ways to get an AI model to produce toxic or racist content, expose credit card information, or espouse misinformation. The only way to effectively protect AI apps from this volume of attack vectors is with data. A lot of it. Safeguarding against AI threats requires extensive knowledge of what those threats are. 


Rejection Doesn't Have to Be a Bad Thing. Here's How You Can Use It as a Tool for Success.

Pain is inevitable, but suffering is optional. Recognize that you have a choice in how you feel about rejection. Whatever story you tell yourself about rejection comes from you. It's up to you to interpret the information that exists in your world. You have the power to flip the script, change the narrative and tell yourself a different story. You can choose to view rejection as a good thing — it means you put yourself out there, asked a tough question and exuded courage. It means you got out of your comfort zone, which always helps us grow and evolve. It means you got to practice a skill (the skill of asking, influencing or selling). That practice will help you grow thicker skin and hone your craft, making you stronger and tougher. With that in mind, you can choose to view rejection as a good thing. ... Once you've been rejected and know why, you can adjust your strategy. You might learn that making calls at lunch time isn't effective because no one answers the phone. You might learn you've been targeting the wrong demographic and need to pick different prospects. You might learn prospecting on the weekdays isn't as effective as prospecting on weekends. 


You should be worried about cloud squatting

The core issue is that cloud asset deletions often occur without removing associated records, which can create security risks for subdomains. Failure to also delete records allows attackers to exploit subdomains by creating unauthorized phishing or malware sites. This is called cloud squatting. Resources are typically provisioned and deallocated programmatically. Allocating assets such as virtual servers and storage space is quick, generally done in seconds, but deallocation is more complex, and that’s where the screwups occur. ... To mitigate this risk, the security teams design internal tools to comb through company domains and identify subdomains pointing to cloud provider IP ranges. These tools check the validity of IP records assigned to the company’s assets. These are assigned automatically by cloud providers. I always get nervous when companies create and deploy their own security tools, considering that they may create a vulnerability. Mitigating cloud squatting is not just about creating new tools. Organizations can also use reserved IP addresses. This means transferring their owned IP addresses to the cloud, then maintaining and deleting stale records, and using DNS names systemically.
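
A hedged sketch of such a check, using the dnspython library with placeholder subdomains and IP ranges, resolves each record and flags those that point into a cloud provider's address space at an IP the organization no longer controls.

    # Flag potentially dangling DNS records that point at released cloud IPs.
    import ipaddress
    import dns.resolver

    OWNED_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # addresses you still hold
    CLOUD_RANGES = [ipaddress.ip_network("3.0.0.0/8")]       # illustrative provider range

    def is_dangling(hostname: str) -> bool:
        try:
            answers = dns.resolver.resolve(hostname, "A")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        for record in answers:
            addr = ipaddress.ip_address(record.to_text())
            in_cloud = any(addr in net for net in CLOUD_RANGES)
            still_owned = any(addr in net for net in OWNED_RANGES)
            if in_cloud and not still_owned:
                return True
        return False

    for sub in ["app.example.com", "legacy.example.com"]:
        print(sub, "dangling:", is_dangling(sub))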


Great business partners drive great business performance

A core part of the finance team is also risk management and the ability to say “No”. Too often finance teams say “No”. My boss says that a CFO is many times a CF-No. There are two aspects here. To use football parlance, a CFO has not just to keep score but to score goals. This means that finance teams have to enable risk-taking. No risk, no gain. Capital allocation and building resilience to take measured risks is a critical function of the CFO. In a VUCA world, understanding the risks is a critical imperative. The Covid pandemic and the Ukraine war have resulted in significant supply chain risks. The rapid pace of digitisation, AI, and other developments threaten business models. Making sense of the developments and allowing strategic choices to develop, capital allocation to be done and monies invested is now engaging CFOs significantly. The CFO, at times, has to be a CF-No. Exercising this has to be done very carefully. Done too often, the finance teams become a blocker. But if the teams have done enough to provide insights, engage well with the business, and develop trust – saying a “No” is accepted as sage advice by the business. Getting this right is an art. But this is something that finance teams now have to constantly work on.


How the new Instegogram threat creates liability for organizations

Under Section 230 of the Communications Decency Act (CDA), companies that offer web-hosting services are typically shielded from liability for most content that customers or malicious users place on the websites they host. However, such protection may cease if the website controls the information content. A company that uses a social media network to create the picture or develop information would arguably control that information and thus may not be immune. That is, if a service provider is "responsible, in whole or in part, for the creation or development of the offending content," its actions could fall outside the CDA's protections. Whether the CDA protections extend to damage caused by malware is still largely an open question of law. Companies could therefore be liable for third-party damage resulting from an Instegogram attack, even if they did not know the digital image was infected. As no statutory immunities exist to shield social media users, a company could be liable for any resulting damage caused by a criminal hacker's embedded command-and-control infrastructure. 


The only CIO resolution that matters

CIOs need to own, or at the very least contribute substantively to, the overarching narrative regarding IT’s business context — what is going on, what has gone right, what has gone wrong, which technology developments require action, and so on. Ideally, the office of the CIO would brief the enterprise on a systemic basis on these matters. I am thinking of something similar to the President’s Daily Brief. Four critical decisions need to be made to establish such a brief: what form should the briefing to take, what subject areas should we keep an eye on, which constituencies need to be briefed, and what time frame should we use. Once those decisions have been made, a systemic program of insight capture must also be instituted for the briefings to be effective. Such a learning process — observe, orient — can’t be left to chance. The CIO could assign an individual or set of deputies responsible for enumerating and sharing targeted insights to critical constituencies on a daily, weekly, or monthly basis. This knowledge wrangling — and ignorance vanquishing — operation could work at a departmental or group level and rotate around the staff on an episodic basis.


A lifecycle is a thing that exists from beginning to end. A product lifecycle lasts from inception to decommission. A production lifecycle is a regular way of producing products. A business lifecycle may go from order to ship. A lifecycle is often not even a thing itself, but just the process a thing goes through. I am aging each year. ... Architects build things. We are creative professionals. We make cathedrals. We make solutions. We make better business outcomes in particular ways. I have never seen a group of people who so deeply care about the products they create. Delivery of an outcome based on a business and technology strategy is 90% of architects’ jobs in practice. This means we touch a LOT of lifecycles. And that is why we must be so very good at navigating them. At discussing pros/cons without religious belief, even without emotion. Building beautiful things is its own reward. How we get there is part of the price we pay to do it. The actual architecture lifecycle then is the method used by ALL of the architects in the practice to deliver value to the organization! But beware, this means all of them. You can’t put business architects in one stack and solution in another and enterprise in a third.


The Elusive Quest for DevSecOps Collaboration

While the concept of DevSecOps has been discussed for years as a best practice for integrating security into development lifecycles, actual adoption has been gradual at best. As Or Shoshani, CEO of cloud security provider Stream Security, explains, "In most of the organizations that we have been working with and exposed to, the SecOps and DevOps are still being separated into two different groups." The reality is that despite widespread consensus on the need for closer collaboration between security and development teams, real-world progress has lagged. Shoshani attributes this to the constant tension between an exciting vision and on-the-ground implementation realities. Just as with past innovations like multi-cloud, he notes, "Everybody talks about it, but the industry isn't ready." Systemic culture shifts take patience. What's behind this lagging evolution? Incumbent challenges around processes, mindsets, and communication persist. Groups accustomed to working in silos and throwing issues "over the wall" resist new rhythms. Security teams that try to validate each release phase before the next begins trip up accelerated development timetables. And without air cover from leadership, there's little incentive to try.


Building A Secure Foundation: Embracing Best Practices For Coding

Not sure where to start when investing in secure coding practices? Begin with these tips: Organizations should provide developers with comprehensive training on secure coding practices. This training should cover topics such as common vulnerabilities, mitigation techniques and security tools; Organizations should invest in static analysis tools to help developers identify and address vulnerabilities in their code. These tools can automate the process of detecting security flaws, saving developers time and effort; Organizations should create a culture of security awareness within the development team. This culture should encourage open communication about security concerns and promote a shared responsibility for building secure software applications; Developers should stay up to date on the latest security threats and vulnerabilities, which can be achieved by reading security blogs, attending conferences and participating in online forums; Developers should utilize security tools and resources to identify and address potential security flaws in their code. These tools can include static analysis tools, code review tools and security libraries.


From Compliance-First to Risk-First: Why Companies Need a Culture Shift

A paradigm shift is underway as businesses evolve – transitioning from a traditional "Compliance-First" approach to a more dynamic and forward-thinking "Risk-First" mindset. This cultural shift recognizes that compliance, while essential, should not be viewed in isolation but as an integral component of a broader risk management strategy. This evolution is not merely a conceptual adjustment but a pragmatic necessity, as organizations seek to proactively identify, understand, and mitigate risks, enhancing their resilience and adaptability in an ever-changing business environment. This examination dives into the importance of companies adopting a cultural transformation. This shift involves moving from a narrow emphasis solely on compliance to a broader and more strategic embrace of risk. Beyond mere obligation, this shift fosters a culture that meets regulatory requirements and positions organizations to thrive amidst uncertainty, bolstering their long-term sustainability. As we explore the complexities of this change, we uncover the fundamental connection between compliance and risk.



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - December 25, 2023

Technical Debt is Killing Your Business: How a PLM Strategy Helps

Many organizations implicitly tolerate technical debt as a necessary investment to adapt to changing circumstances or swiftly seize new opportunities. Successful businesses stress the importance of managing technical debt through acceptance, measurement and proactive strategies, including the adoption of open standards, abstraction and incremental changes. ... Defining and adopting an effective PLM strategy is instrumental in managing technical debt comprehensively. A 2020 McKinsey study titled “Tech Debt: Reclaiming Tech Equity” highlighted the importance of strategic alignment, stating that, “A degree of technical debt is an unavoidable cost of doing business, and it needs to be managed appropriately to ensure an organization’s long-term viability.” Furthermore, the study emphasized that “the goal is not to reach zero technical debt. That would involve devoting all resources to remediation rather than building points of competitive differentiation. It would also make it difficult to expedite IT development when strategic or risk considerations require it. Rather, companies should work to size, value and control their technical debt and regularly communicate it to the business.”


Improving the case for waste from data centers

The challenge originally stems from the practical complexities of collecting and harnessing residual heat from data centers. Planning authorities actively encourage heat reclamation, but the lack of existing infrastructure poses a significant obstacle. While planning conditions that mandate developers to allow for connections to ‘future’ heating networks is a positive move, this becomes futile where there is no corresponding plan for heat network development. Developers comply with the condition out of an obligation to meet regulatory requirements rather than in genuine expectation of the infrastructure ever being used. From the perspective of data center operators, investing in the infrastructure only makes sense when it generates Operational Expenditure (OpEx) savings through the reduced power and water consumption. However, the misalignment in load profiles complicates this matter. As the heating network’s demands peak in winter whilst reducing in summer, the data center operates the opposite way, as it can take advantage of ‘free cooling’ during the colder months. This misalignment in load profiles also impacts the ESCos.


The rise of observability and why it matters to your business

Automation is a two-edged sword. It’s one of those alluring concepts, but there’s real caution around trusting machines to judge what actions should and shouldn’t be taken and when. So given the sensitive nature of change management, we would expect this trend to continue to lean toward AI-led automation, but it will take some time before humans are mostly out of the loop. Moreover, while many vendors claim to have AI, there’s a wide spectrum of capabilities, and customers should be very cautious about vendor claims in this regard. Now, not surprisingly, the regulated industries of financial services, healthcare and government see a much lower tendency to be mostly-AI led in this context over the next year (well under 5% say mostly AI-led in this chart), whereas industries such as energy and high tech are much more likely to adopt AI aggressively in this space. Interestingly, the data show that senior managements are more likely to push for AI adoption whereas the practitioners, who literally have their jobs on the line – that is, machines replacing humans or getting fired for implementing rogue automation – are much less optimistic.


Innovate to elevate: Blueprint for business excellence in 2024 and beyond

The upcoming year promises an exciting development in the form of GenAI, which will be integrated into everyday applications such as search engines, office software, design tools, and communication platforms. This integration will reveal its full potential as a super-smart hyper-automation engine. With the ability to take over routine tasks, including information retrieval, scheduling, compliance management, and project organization, individuals will be able to boost their productivity and efficiency. As per a report, hyper-automation, combined with other technologies, can automate work activities that currently occupy 60-70% of employees’ time by 2024. This development offers immense value to sectors such as software engineering, R&D, customer operations, marketing, and sales, making it an indispensable part of the IT industry. In this rapidly evolving world, organizations are constantly searching for ways to enhance customer service and drive growth. One of the most promising ways to achieve this is by embracing hyper-automation technologies such as AI-powered tools, Natural Language Processing (NLP), chatbots, and virtual assistants. 


4 ways robotics, AI will transform industry in 2024

The future of manufacturing is intricately linked to IT/OT integration as data will underpin innovation and efficiency. Research shows that the manufacturing industry has been at the forefront of adopting cloud-based software services and we are already seeing some customers use these to enhance quality, cost efficiency, and predictability. That makes me confident that 2024 will see the growth of data-driven logistics and manufacturing systems. Many still have an outdated view of the cloud as merely being a data collector and backup function, as we know it from our private lives. But the real potential and power don’t lie in storing data or even in linking machines. The real transformative leap comes when cloud-based software services connect humans and machines and help manufacturers simplify complex processes and make smarter decisions. The benefits of this digital evolution are significant. Remote access to manufacturing data enables quick responses to issues and continuous automation improvement. With dynamic systems now essential, trusted cloud technologies offer the latest in security and state-of-the-art services.


Proper Data Management Drives Business Success

Organizations across industries are excited about generative artificial intelligence (AI) and large language models (LLMs), and for good reason. Tools like Chat GPT-4 have the potential to transform business and revolutionize how employees do their jobs, so it’s no surprise that many people are enthusiastic about implementing them within their organizations. However, LLMs are only as good as the data on which they are trained. If an organization’s data isn’t properly sorted, tagged, and secured, the addition of LLMs will not be nearly as transformative as business leaders hope. Nearly half (45%) of IT leaders admitted that ineffective and inefficient Data Management means they can’t leverage emerging technology such as generative AI, which can put them at a competitive disadvantage. IT leaders must holistically assess the state of their data practices before implementing generative AI. Only 13% of respondents reported that Data Management initiatives are their number one priority, so it’s unsurprising that 77% of the average U.S. company’s data is redundant, obsolete, or trivial (ROT) or dark data. 


Understanding the NSA’s latest guidance on managing OSS and SBOMs

In an effort to provide context and prioritization to downstream product and software consumers, the guidance recommends that suppliers and developers adopt Vulnerability Exploitability eXchange (VEX) documents to help consumers and customers know which components are actually affected by a vulnerability, which have been resolved, and which should potentially be addressed via compensating controls. The NSA also recommends that suppliers and vendors adopt attestation processes to demonstrate the secure development of a product throughout the building, scanning, and packaging stages of product development and distribution. This work is being led by industry efforts such as in-toto and the SSDF, with self-attestations used when machine-readable artifacts are not generated. It helps provide assurance not just of the components of an end product but of the security of the development process as well. To address vulnerabilities, the NSA recommends using not just CVE and NVD but also other vulnerability databases such as OSV, as well as vulnerability intelligence sources such as the CISA Known Exploited Vulnerabilities (KEV) catalog and the Exploit Prediction Scoring System (EPSS).
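The guidance names these sources without prescribing how to combine them; the following Python sketch is one hedged way a software consumer might rank CVEs found in an SBOM, putting KEV-listed issues first and ordering the rest by EPSS score. The feed URLs and field names reflect the public CISA and FIRST.org endpoints as commonly documented and should be verified before use; the CVE list at the bottom is a placeholder.

```python
import json
import urllib.request

# Public feed locations as of this writing (verify before relying on them).
KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
EPSS_URL = "https://api.first.org/data/v1/epss?cve={cve}"

def fetch_json(url):
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

def kev_cve_ids():
    """CVE IDs currently listed in CISA's Known Exploited Vulnerabilities catalog."""
    feed = fetch_json(KEV_URL)
    return {item["cveID"] for item in feed.get("vulnerabilities", [])}

def epss_score(cve):
    """EPSS exploitation probability for one CVE (0.0 if the API has no record)."""
    data = fetch_json(EPSS_URL.format(cve=cve)).get("data", [])
    return float(data[0]["epss"]) if data else 0.0

def prioritize(cves):
    """Rank CVEs: anything in KEV first, then by descending EPSS score."""
    kev = kev_cve_ids()
    scored = [(cve, cve in kev, epss_score(cve)) for cve in cves]
    return sorted(scored, key=lambda t: (t[1], t[2]), reverse=True)

if __name__ == "__main__":
    # Placeholder list; in practice these would come from scanning an SBOM.
    for cve, in_kev, epss in prioritize(["CVE-2021-44228", "CVE-2023-12345"]):
        print(f"{cve}  KEV={in_kev}  EPSS={epss:.3f}")
```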


5 Common Data Science Challenges and Effective Solutions

The upskilling and reskilling of existing data science experts aren’t limited to technical skills. Data science experts also need enhanced problem-solving and communication skills. With the massive amount of data now available come new challenges and problems that need to be addressed. The solutions to these problems need to be properly communicated to team members and management, who may or may not have the expertise to interpret data on their own. We’ll explore this in more detail later. To address the challenge of a pool of data scientists that is small relative to demand, you need to stand out as an employer and attract some of the professionals in that pool. So, offer competitive salaries and benefits. The average base pay for data scientists in the US is $146,422, according to Glassdoor, and if you can offer more, all the better. Whether you hire data scientists or already have data professionals as employees, you need to invest in data science workshops and training. These can help ensure your team’s data science skills keep pace with current practices and standards in the data science industry.


How Observability Strengthens Your Company Culture

Observability breaks down silos and makes collaboration across different clouds, databases, and dashboards seamless. For example, an issue that the DevOps team discovers through observability might lead them to collaborate with the design team in a way they may never have before. Leaders should aim to do the same for their teams by fostering greater collaboration across the entire organization. A lack of effective collaboration and communication is the top cause of workplace failures, according to 86 percent of employees and executives. Just as observability is a step up from monitoring, collaboration is the output that evolves from transparent communication. Your head of accounting probably knows precisely where each decimal point needs to be within a spreadsheet and why it needs to be there. Can they say the same about the IT team’s technology stack or the sales team’s go-to-market plan? With a culture underpinned by collaboration, employees won’t just learn how to get along. They’ll understand why each cog in your machine functions the way it does, as well as the effect of their work on their fellow employees, the end product, and the business as a whole.


The Third-Party Threat for Financial Organisations

DORA requires financial entities to have robust contracts in place with ICT service providers. Financial organisations must also maintain a register of service providers and report on this to the competent authority every year. The key here is to manage risks. This includes managing the risk of having too many critical or important functions supported by a small number of service providers. In addition, DORA requires that financial entities only contract with providers that “comply with appropriate information security standards”. Where the ICT service provider supports critical or important functions, the financial entity must ensure the standards are “the most up-to-date and highest quality”. ... Unlike the GDPR (General Data Protection Regulation), DORA does not require that these standards be identified by a specific authority, so it’s reasonable to assume that ISO 27001 – since it sets the international benchmark for information security management – would qualify as such a standard. As Alan mentioned, certifications like ISO 22301 and Europrivacy™/® add further assurance, as do due diligence checks on suppliers’ resilience, particularly for critical suppliers.



Quote for the day:

"Innovation is taking two things that already exist and putting them together in a new way." -- Tom Freston

Daily Tech Digest - December 24, 2023

The emerging role of the chief resilience officer in BCDR

Chief resilience officer is a relatively new senior-level executive title and is still evolving. Responsibilities can include business continuity and disaster recovery (BCDR), incident response, cybersecurity, and risk management. The chief resilience officer might also be designated as the lead executive for crisis management activities. Chief resilience officers must ensure the organization can adapt and improve its operations so that future disruptive events are more effectively mitigated, resulting in minimal damage to the organization and its reputation. ... Preparing for and responding to disruptive events traditionally has been managed by a wide variety of job titles in an organization. Sometimes the role is part of the IT staff or disaster recovery team. Other times it can be part of administration, risk management, emergency management, human resources or facilities management. In medium to large organizations, the need for a central leadership role for these and related activities has become evident. ... Establishing a chief resilience officer reinforces the importance of BCDR activities across the entire organization.


Global securities body releases DeFi recommendations: Finance Redefined

Following its release, some community members worried about how it could “kill” DeFi, while others said it would not have a fatal effect. Apart from IOSCO’s move, China’s central bank also urged jurisdictions across the globe to regulate the DeFi space jointly. Meanwhile, the DeFi ecosystem flourished in the past week thanks to ongoing bullish market momentum, with most tokens trading in green on the weekly charts. IOSCO published nine recommendations for DeFi. The organization encourages consistency when it comes to regulatory oversight across jurisdictions worldwide. The new recommendations were a companion to the digital asset and crypto recommendations released in November. Furthermore, IOSCO released a note on how the two sets of recommendations can work hand in hand depending on the level of decentralization of regulated entities. ... Apart from IOSCO, the People’s Bank of China (PBoC) also pushed for joint DeFi regulation in its latest financial stability report. The central bank allotted a section to crypto assets in its report, underscoring the need for the industry to be regulated with joint efforts from various jurisdictions.


Unleashing power of language models in India’s IT landscape: The talking network revolution

The cornerstone of this transformative paradigm is the ability of language models to comprehend, analyze, and respond to user queries with human-like understanding. India, with its vast and diverse linguistic landscape, stands to benefit immensely from language models that can comprehend and respond in multiple languages. This linguistic versatility ensures that the Talking Network caters to the linguistic diversity of the Indian corporate environment, making it an inclusive and accessible solution for businesses across the country. One of the pivotal trends catalyzed by GAI and LLMs in India is the development of proactive and predictive IT maintenance tools. AIM Research predicts that, by 2024, 40 per cent of enterprise applications will embed conversational AI as a standard feature. Traditionally, IT maintenance has been a reactive process, addressing issues only when they arise. However, the Talking Network introduces a proactive dimension by leveraging predictive analytics and machine learning capabilities embedded in these language models. By analyzing historical data and identifying patterns, the network can foresee potential glitches and address them before they escalate into major disruptions.
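The article stops short of a concrete mechanism, so the following is only an illustrative sketch of the proactive idea: a rolling z-score over a historical service metric that flags readings drifting well above the recent baseline before they become outages. The metric, window, and threshold are hypothetical.

```python
from collections import deque
from statistics import mean, stdev

def rolling_anomalies(samples, window=60, z_threshold=3.0):
    """Flag points sitting more than z_threshold standard deviations
    above the mean of the preceding `window` samples."""
    history = deque(maxlen=window)
    flagged = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and (value - mu) / sigma > z_threshold:
                flagged.append((i, value))
        history.append(value)
    return flagged

# Hypothetical per-minute response-time readings (ms) for a network service,
# with a sudden spike at the end that a reactive process would only notice later.
latencies = [20 + (i % 5) for i in range(120)] + [95]
print(rolling_anomalies(latencies))  # flags the final spike
```

In a "Talking Network" setting, a language model would sit on top of signals like these, explaining the flagged drift in plain language and suggesting remediation rather than surfacing raw numbers.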


Why Bill Gates Says AI Will Supercharge Medical Innovations

He cites an AI-powered tool under development at the Aurum Institute in Ghana that helps health workers prescribe antibiotics without contributing to antimicrobial resistance, where pathogens learn how to get past antibiotic defenses. The tool can comb through all the available information about antimicrobial resistance and suggest the best drug plan for a patient. ... Gates also sees AI assisting in education, calling AI education tools "mindblowing," as they are tailored to individual learners, and says they will "only get better." He's excited about how the technology can be localized to students in many different countries and cultural contexts. Not everything on Gates' mind is AI-related. He's concerned about climate change, saying he's "blown away by the passion from young climate activists," and hopeful that 2024 will see more investment in innovations that will help those who are most affected by the climate crisis. And he even plunges into the debate over nuclear energy. Gates notes that high-profile disasters such as Chernobyl in the 1980s and Three Mile Island in the late 1970s have spotlighted the risks, but over the past year, he's seen a shift towards acceptance.


How AI Is Transforming Industries

A critical sector of the Indian economy, agriculture contributes 18 per cent of the GDP. And several new-age start-ups are emerging in this segment, with the likes of CropIn, DeHaat, BharatAgri, and Bijak. These start-ups are helping develop innovative solutions for various aspects of agriculture, especially precision farming, supply chain management, and market linkages. The 2019 start-up Fyllo presently has over 100 agronomy models to help farmers produce over 20 crops. It provides insights on growing crops based on climate or occurrences of diseases and pests. Fyllo believes that both problems can be solved with accurate data, which led it to build a number of weather and pathogen prediction models using AI. "We use AI for multiple use cases at Fyllo. The first use case is predicting the weather. We use climate data from our devices and machine learning-based weather models to come up with a highly precise farm-level weather prediction model. Another use case is getting crop health and crop stage identification from satellite imagery. We use various machine learning models to do that."


Magnetic Knots Push Future Computing Toward 3D

“In the last decades, electronics basically developed in the paradigm of two-dimensional systems,” says Nikolai Kiselev, a staff scientist at the Peter Grünberg Institute in Jülich, Germany. “Which from a certain point of view is absolutely reasonable because technologically it’s much easier to fabricate and maintain such devices. But if we look toward the future, most probably to make our devices the most efficient, at some point, we will have to turn towards a three-dimensional architecture. And that’s where the discovery we made in our paper might become useful. ... Although hopfions move around readily, other aspects of their computing potential are still uncertain. The team used transmission electron microscopy to image the hopfion, and measuring its location more efficiently is an outstanding problem. The team says they plan to look at how these objects respond to electric current, which could help detect and track them. Plus, precise details on the exact ways hopfions might encode information remain an open question. That said, Kiselev adds, many questions like this don’t yet have answers because there has been no reason to ask them.


The Art Of Listening: Silent Communication In Leadership

Silence, first and foremost, is a medium of introspection and reflection. Leaders, constantly barraged by information and demands, may find themselves lost in a maze of noise—both external and internal. Silence offers a sanctuary, a space to step back and reflect. It allows leaders to process information, contemplate decisions, and align their actions with their core values and objectives. This introspective silence is not merely an absence of noise; it’s an active engagement with one’s thoughts, a deliberate pause to understand the bigger picture. Moreover, silence can be a powerful communication tool. It’s not just about the absence of speech; it’s about listening, understanding, and absorbing. ... Silence also plays a crucial role in conflict resolution and negotiation. In tense situations, a leader’s silence can de-escalate emotions and give everyone a moment to breathe and reassess. By not immediately responding to a provocation or a challenging statement, leaders can avoid knee-jerk reactions that might exacerbate the conflict. Instead, silence can be used to control the tempo of the conversation, allowing for thoughtful and measured responses that are more likely to lead to constructive outcomes.


2024 in laptops: it’s shaping up to be a big year for Windows

It’s the AI coprocessor inside that’s intriguing to me, particularly because Intel and Microsoft have both been dropping hints about a future version of Windows arriving soon and how “AI is going to reinvent how you do everything on Windows.” Rumors suggest that Windows 12 will include a large focus on AI and take advantage of the AI coprocessors that Intel is building into its Core Ultra chips. (Intel isn’t the only one: AMD also has its own Ryzen 7000 mobile processors that include a dedicated AI engine, and these types of neural processing unit (NPU) chips are common on Arm-powered Windows laptops.) Intel held an AI event to launch its Core Ultra chips this month, just ahead of the annual Consumer Electronics Show (CES), where we’ll see all of the new laptops that are powered by Intel’s new chips. Lenovo, MSI, Acer, and Asus are all launching laptops with these new chips inside. While Intel talked a lot about “AI everywhere,” the missing piece of the puzzle, a new AI-focused version of Windows, is still a mystery right now.


Chips To Compute With Encrypted Data Are Coming

At first glance, it might seem impossible to do meaningful computation on data that looks like gibberish. But the idea goes back decades, and was finally made possible in 2009 by Craig Gentry, then a Stanford graduate student. Gentry found a way to do both addition and multiplication without calculation-killing noise accumulating, making it possible to do any form of encrypted computation. One comparison you can use to understand FHE is that it’s analogous to a Fourier transform. For those of you who don’t remember your college signal processing, a Fourier transform is a mathematical tool that turns a signal in time, such as the oscillation of voltage in a circuit, into a signal in frequency. One of the key side effects is that any math you can do in the time domain has its equivalent in the frequency domain. So you can compute in either time or frequency and come up with the same answer. The genius of fully homomorphic encryption is that it uses lattice cryptography—a form of quantum-computer-proof encoding—as the mathematical transformation. The problem with this approach is that the transformation leads to a big change in the type and amount of data and in the sorts of operations needed to compute. That’s where the new chips come in.
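The Fourier analogy can be made concrete with a short NumPy sketch (an illustration of the analogy only, not of FHE itself): circular convolution computed directly in the "time domain" agrees with pointwise multiplication carried out in the "frequency domain", just as FHE lets operations on ciphertexts stand in for operations on the underlying plaintexts.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=8), rng.normal(size=8)

# "Time domain": circular convolution computed directly on the original signals.
direct = np.array([sum(a[k] * b[(n - k) % len(a)] for k in range(len(a)))
                   for n in range(len(a))])

# "Frequency domain": transform, multiply pointwise, transform back.
via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

# Both routes give the same answer: work done in the transformed domain
# corresponds exactly to work done in the original domain, which is the
# intuition the Fourier comparison for FHE is pointing at.
print(np.allclose(direct, via_fft))  # True
```

The catch the article describes is that, unlike an FFT, the lattice-based transformation blows up the size of the data and changes the operations needed, which is why dedicated hardware is being built for it.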


Ransomware Attackers Abuse Multiple Windows CLFS Driver Zero-Days

CLFS is a high-performance, general-purpose logging system available for user- or kernel-mode software clients. Its kernel access makes it eminently useful for hackers seeking low-level system privileges, and its performance-oriented design has left a series of security holes in its wake in recent years, which ransomware actors in particular have pounced on. ... Nothing in particular changed about the CLFS driver this year. Rather, attackers seem to have just now identified what was wrong with it this whole time: It leans too far left in that inescapable, eternal balance between performance and security. "CLFS is perhaps way too 'optimized for performance,'" Larin wrote, detailing all of the various ways the driver prioritizes it over protection. "It would be better to have a reasonable file format instead of a dump of kernel structures written to a file. All the work with these kernel structures (with pointers) happens right there in the blocks read from disk. Because changes are made to the blocks and kernel structures stored there, and those changes need to be flushed to disk, the code parses the blocks over and over again every time it needs to access something."



Quote for the day:

"The signs of outstanding leadership are found among the followers." -- Max DePree