Daily Tech Digest - February 08, 2024

The do-it-yourself approach to MDM

If you’re comfortable taking on extra responsibilities and costs, the next big question is whether you can get the right tool — or, more often, the many tools — you need. This is where you need a detailed understanding of the mobile platforms you have to manage and every platform that needs to integrate with them for everything to work. MDM isn’t an island. It integrates with a sometimes staggering number of enterprise components. Some, like identity management, are obvious; others, like log management or incident response, are less obvious when you think about successful mobility management. Then there are the external platforms that need connections. Think identity management — Entra, Workspace, Okta — and things like Apple Business Manager, all of which need to work well in both everyday and unusual situations. Then tack on the network, security, auditing, load balancing, inventory, the help desk and various other services. You’re going to need something to connect with everything you already have, or you could find yourself saddled with multiple migrations.


NCSC warns CNI operators over ‘living-off-the-land’ attacks

The NCSC said that even organisations with the most mature cyber security techniques could easily fail to spot a living-off-the-land attack, and assessed it is “likely” that such activity poses a clear threat to CNI in the UK. ... In particular, it warned, both Chinese and Russian hackers have been observed living off the land on compromised CNI networks – one prominent exponent of the technique is the GRU-sponsored advanced persistent threat (APT) actor known as Sandworm, which uses LOLbins extensively to attack targets in Ukraine. “It is vital that operators of UK critical infrastructure heed this warning about cyber attackers using sophisticated techniques to hide on victims’ systems,” said NCSC operations director Paul Chichester. “Threat actors left to carry out their operations undetected present a persistent and potentially very serious threat to the provision of essential services. Organisations should apply the protections set out in the latest guidance to help hunt down and mitigate any malicious activity found on their networks.” “In this new dangerous and volatile world where the frontline is increasingly online, we must protect and future-proof our systems,” added deputy prime minister Oliver Dowden.


What Are the Core Principles of Good API Design?

Your API should also be idiomatic to the programming language it is written for and respect the way that language works. For example, if the API is to be used with Java, use exceptions for errors, rather than returning an error code as you might in C. APIs should follow the principle of least surprise. Part of the way this can be achieved is through symmetry; if you have add and remove methods, these should be applied everywhere they are appropriate. A good API comprises a small number of concepts; if I’m learning it, I shouldn’t have to learn too many things. This doesn’t necessarily apply to the number of methods, classes or parameters, but rather the conceptual surface area that the API covers. Ideally, an API should only set out to achieve one thing. It is also best to avoid adding anything for the sake of it. “When in doubt, leave it out,” as Bloch puts it. You can usually add something to an API if it turns out to be needed, but you can never remove things once an API is public. As noted earlier, your API will need to evolve over time, so a key part of the design is to be able to make changes further down the line without destroying everything.
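
To make those principles concrete, here is a minimal sketch in Python, which, like Java, treats exceptions as the idiomatic error channel. The Cache class and its capacity rule are hypothetical, invented purely for illustration:

    # Idiomatic errors (exceptions, not return codes), symmetric
    # add/remove methods, and a deliberately small conceptual surface.
    class CacheFullError(Exception):
        """Raised when the cache cannot accept another entry."""

    class Cache:
        def __init__(self, capacity: int):
            self._capacity = capacity
            self._items: dict = {}

        def add(self, key, value) -> None:
            if len(self._items) >= self._capacity:
                raise CacheFullError(f"capacity {self._capacity} reached")
            self._items[key] = value

        def remove(self, key) -> None:
            # Symmetry: every add() has a matching remove(); both raise
            # exceptions on misuse rather than returning sentinel codes.
            if key not in self._items:
                raise KeyError(key)
            del self._items[key]

        def get(self, key):
            return self._items[key]  # KeyError propagates, per least surprise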


Russian Ransomware Gang ALPHV/BlackCat Resurfaces with 300GB of Stolen US Military Documents

The ALPHV/BlackCat ransomware group has threatened to publish and sell 300 GB of stolen military documents unless Technica Corporation gets in touch. “If Technica does not contact us soon, the data will either be sold or made public,” the ransomware gang threatened. However, there is no guarantee that the ransomware gang would not pass the military documents to adversaries even after the military contractor pays the ransom. The BlackCat ransomware gang also posted screenshots of the leaked military documents as proof, displaying the victims’ names, social security numbers, job roles and locations, and clearance levels. Other military documents include corporate information such as billing invoices and contracts for private companies and federal agencies such as the FBI and the US Air Force. So far, the motive of the cyber attack remains unknown, but it’s common for threat actors to feign financial motives to conceal their true geopolitical objectives. While the leaked military documents may not be classified, they still contain crucial personal information that state-linked threat actors could use for targeting.


6 best practices for better vendor management

To build a stronger relationship with vendors, “CIOs should bring them into the fold regarding their priorities and potential concerns about what may —or may not — lie ahead, from a regulatory perspective or the general economic climate, for example,” says Kevin Beasley, CIO at VAI, a midmarket ERP software developer. “A few years ago, supply-chain snags had CIOs looking for new technology,” Beasley says. “Lately, a talent shortage means CIOs are pushing for more automation. CIOs that don’t delay posing questions about how vendor products can solve such challenges, but also take the time to hear the information, will build a valuable rapport that can benefit both parties.” Part of building a collaborative partnership is staying in close contact. It’s important to establish clear communication channels and schedule regular check-ins with active vendors, “to understand performance, expectations, and progress while recognizing that no process or service goes perfectly all the time,” says Patrick Gilgour, managing director of the Technology Strategy and Advisory practice at consulting firm Protiviti.


Three commitments of the data center industry for 2024

To become more authentic and credible in these reputation-building dialogues and go beyond the data center, we must be more representative of the people our infrastructure ultimately serves. Although progress has been made, we must keep evolving. We need diversity of background, experience, ethnicity, age, and outlook in order to fully embrace the challenges of digital infrastructure. The range of roles, skillsets, and opportunities in the sector is far wider than many outside the industry recognize. Creating organizations where every person can be themselves and deliver in line with their ethics, values, and beliefs is a prerequisite for building a positive reputation. And of course, the more attractive an industry we become, the more great candidates, partners, and supporters we’ll attract. ... Speaking of inspiring the next generation, 2024 can be the year in which we embrace youth. How do we attract more young people into the industry? By inspiring them. The data center sector is a dynamic, exciting, and rapidly growing sector. We want to ensure this is being effectively articulated in print, across social media, and online.


Is your cloud security strategy ready for LLMs?

When employees and contractors use those public models, especially for analysis, they will be feeding those models internal data. The public models then learn from that data and may leak those sensitive corporate secrets to a rival who asks a similar question. “Mitigating the risk of unauthorized use of LLMs, especially inadvertent or intentional input of proprietary, confidential, or material non-public data into LLMs” is tricky, says George Chedzhemov, BigID’s cybersecurity strategist. Cloud security platforms can help, he adds, especially for access controls and user authentication, encryption of sensitive data, data loss prevention, and network security. Other tools are available for data discovery and surfacing sensitive information in structured, unstructured, and semi-structured repositories. “It is impossible to protect data that the organization has lost track of, data that has been over-permissioned, or data that the organization is not even aware exists, so data discovery should be the first step in any data risk remediation strategy, including one that attempts to address AI/LLM risks,” says Chedzhemov.
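
As a rough illustration of that data discovery step, the sketch below scans a directory tree for common PII patterns. It is a toy, not BigID's actual method; the regexes, file type, and path are assumptions for demonstration:

    # A toy sketch of pattern-based data discovery: the kind of scan a
    # discovery step might start with. Real products use far more
    # sophisticated classification than these illustrative regexes.
    import re
    from pathlib import Path

    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def scan_repository(root: str) -> dict:
        """Walk a directory tree and report files containing likely PII."""
        findings = {}
        for path in Path(root).rglob("*.txt"):
            text = path.read_text(errors="ignore")
            hits = [name for name, rx in PATTERNS.items() if rx.search(text)]
            if hits:
                findings[str(path)] = hits
        return findings

    if __name__ == "__main__":
        for f, kinds in scan_repository("./shared_docs").items():  # assumed path
            print(f, "->", kinds)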


Shadow AI poses new generation of threats to enterprise IT

Functional risks stem from the possibility that an AI tool will fail to function properly. For example, model drift is a functional risk. It occurs when the AI model falls out of alignment with the problem space it was trained to address, rendering it useless and potentially misleading. Model drift might happen because of changes in the technical environment or outdated training data. ... Operational risks endanger the company's ability to do business. Operational risks come in many forms. For example, a shadow AI tool could give bad advice to the business because it is suffering from model drift, was inadequately trained or is hallucinating -- i.e., generating false information. Following bad advice from GenAI can result in wasted investments -- for example, if the business expands unwisely -- and higher opportunity costs -- for example, if it fails to invest where it should. ... Legal risks follow functional and operational risks if shadow AI exposes the company to lawsuits or fines. Say the model advises leadership on business strategy. But the information is incorrect, and the company wastes a huge amount of money doing the wrong thing. Shareholders might sue.
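
Model drift can be surfaced with fairly simple monitoring. The sketch below uses the population stability index (PSI) to compare a live feature distribution against its training baseline; the synthetic data and the 0.25 alert threshold are illustrative assumptions, not an established standard:

    # A minimal sketch of one way to surface model drift: compare the
    # distribution of a live input feature against a training baseline
    # using the population stability index (PSI).
    import numpy as np

    def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        edges = np.histogram_bin_edges(expected, bins=bins)
        e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
        e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0)
        a_pct = np.clip(a_pct, 1e-6, None)
        return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

    train_feature = np.random.normal(0.0, 1.0, 10_000)   # baseline data
    live_feature = np.random.normal(0.6, 1.2, 10_000)    # drifted data

    score = psi(train_feature, live_feature)
    print(f"PSI = {score:.3f}")   # > 0.25 is often treated as serious drift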


Creating a Data Quality Framework

A start-up business may not initially have a need for organizing massive amounts of data (it doesn’t yet have massive amounts of data to organize), but a master data management (MDM) program at the start can be remarkably useful. Master data is the critical information needed for doing business accurately and efficiently. For example, the business’s master data contains, among other things, the correct addresses of the start-up’s new customers. Master data must be accurate to be useful – the use of inaccurate master data would be self-destructive. If the organization is doing business internationally, it may need to invest in a Data Governance (DG) program to deal with international laws and regulations. Additionally, a Data Governance program will manage the availability, integrity, and security of the business’s data. An effective DG program ensures that data is consistent and trustworthy and doesn’t get misused. A well-designed DG program includes not only useful software, but policies and procedures for humans handling the organization’s data. A Data Quality framework is normally developed and used when an organization has begun using data in complicated ways for research purposes. 


Meta Is Being Urged to Crack Down on UK Payment Scams

Since social media market platforms such as Facebook Marketplace do not have dedicated payment portals that accept payment cards, Davis said, standard security practices adopted by card issuers cannot be used to protect customers. As a result, preventing fraud on social media platforms is a challenge, he said. "To tackle this, we need greater action from Meta to stop fraudulent ads from being put in front of the U.K. consumers," Davis said. Meta Public Policy Head Philip Milton, who testified before the committee, said his company takes fraud prevention "extremely seriously." Milton said Meta has adopted such measures as verifying ads on its platforms and permitting only financial ads that have cleared the U.K. Financial Services Verification process rolled out by the British Financial Conduct Authority. "A good indicator of fraud is fake accounts, as scammers generally tend to use fake accounts to carry out scams. As fraud prevention, Meta removed 827 million fake accounts in the third quarter of 2023," Milton said. Microsoft Government Affairs Director Simon Staffell said the computing giant pursues criminal infrastructure disruption as one of its fraud prevention strategies.



Quote for the day:

"If you are willing to do more than you are paid to do, eventually you will be paid to do more than you do." -- Anonymous

Daily Tech Digest - February 07, 2024

Can Enterprise DevOps Ever Measure Up?

At the most elite organizations, by Forney’s math, developers are spending up to 70% of their time writing and testing code, while the rest of their time is filled with meetings and context switching. But when you examine that exceptionally high 70%, she explained, you then have to consider how much time they are just “keeping the lights on” or dealing with customer support or are on call, versus “how much time they’re spending on the creation of new value.” She said it becomes a “diminishing bucket of space.” Especially at older organizations that haven’t quite migrated to the cloud and haven’t quite moved completely from Waterfall to agile, she finds developers are often focusing on the wrong work. Or they are building workarounds on top of their technical debt as a quick win, instead of fixing with a long-term vision in mind. “We look at organizations spending a huge amount of time doing planning and thinking these are our top priorities in the organization, but in reality, what’s going on? Are devs spending actually what you would expect to be the bulk of their time [on this]?” Forney said that “more often than not, what you see is they’re spending like 5% of their time across the entire organization level of effort on these most important things.”


IT Security Hiring Must Adapt to Skills Shortages

Omri Weinberg, co-founder and CRO at DoControl, says promoting cybersecurity education, offering mentorship and internships, increasing diversity, and providing ongoing professional development opportunities are all ways to help companies close the cybersecurity skills gap. “Collaboration among stakeholders is essential to address this challenge effectively,” he says. “It all starts at the top.” When it becomes a top priority for the board of directors, the CEO, and other executives, they will invest more time, money, and effort to educate the next generation alongside educational institutions to create more awareness and opportunities for the future of the cyber workforce. “Cybersecurity is one of the fastest evolving industries,” Sunil Muralidhar, vice president of growth and strategic initiatives at ColorTokens, explains via email. “Regardless of the specific specialization an individual might choose to focus on, creative thinking and problem-solving skills are the best skills an employee can have.” Also critical is the ability to collaborate with teams across the company, who may have varying degrees of technical or security skills.


Help for generative AI is on the way

Retrieval-augmented generation, or RAG, is a common method for adding context to an interaction with an LLM. Under the bonnet, RAG retrieves supplementary content from a database system to contextualize a response from an LLM. The contextual data can include metadata, such as timestamp, geolocation, reference, and product ID, but could in theory be the results of arbitrarily sophisticated database queries. This contextual information serves to help the overall system generate relevant and accurate responses. The essence of this approach lies in obtaining the most accurate and up-to-date information available on a given topic in a database, thereby refining the model’s responses. A useful by-product of this approach is that, unlike the opaque inner workings of GPT-4, if RAG forms the foundation for the business LLM, the business user gains more transparent insight into how the system arrived at the presented answer. If the underlying database has vector capabilities, then the response from the LLM, which includes embedded vectors, can be used to find pertinent data from the database to improve the accuracy of the response.
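
A minimal sketch of that retrieval step: store document vectors, rank them by similarity to the query, and prepend the winners to the prompt. The embed() placeholder stands in for a real embedding model, and the final prompt would go to an LLM API; both are assumptions, not any particular vendor's interface:

    # A minimal RAG sketch: retrieve the most relevant stored snippets
    # by vector similarity, then prepend them to the prompt.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Placeholder embedding: a real system calls an embedding model.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.normal(size=64)
        return v / np.linalg.norm(v)

    documents = [
        "Product X ships with a 2-year warranty.",
        "Product X was released in March 2023.",
        "Support tickets are answered within 24 hours.",
    ]
    doc_vectors = np.stack([embed(d) for d in documents])

    def retrieve(query: str, k: int = 2) -> list[str]:
        q = embed(query)
        scores = doc_vectors @ q              # cosine similarity (unit vectors)
        return [documents[i] for i in np.argsort(scores)[::-1][:k]]

    def build_prompt(query: str) -> str:
        context = "\n".join(retrieve(query))
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

    # A real system would now send this prompt to the LLM.
    print(build_prompt("What warranty does Product X have?"))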


Meta to label AI-generated images from Google, OpenAI and Adobe

“We’re building this capability now, and in the coming months we’ll start applying labels in all languages supported by each app,” Clegg added. The move to label AI-generated images from companies such as Google, OpenAI, Adobe, Shutterstock, and Midjourney assumes significance as 2024 will see elections taking place in several countries, including the US, the EU, India, and South Africa. This year will also see Meta learning more about how users are creating and sharing AI-generated content and what kind of transparency netizens are finding valuable, Clegg said. Clegg’s statement about elections recalls the Cambridge Analytica scandal, unearthed by the New York Times and The Observer back in 2018, which saw Facebook data of at least 50 million users being compromised. Last month, ChatGPT-maker OpenAI suspended two developers who created a bot mimicking Democratic presidential hopeful Congressman Dean Phillips, marking the company’s first action against the misuse of AI. Meta, according to Clegg, already marks images created by its own AI feature, which includes attaching visible markers and invisible watermarks.


AI is supercharging collaboration between developers and business users

AI enables team members "to create and share content more easily, automate, and optimize business processes more efficiently," he continues. "It enhances team communications by bringing clarity and utilizing transcripts to leverage exact words to remove ambiguity. All of this helps learning and development, and fosters team culture and engagement." The company also employs "AI-powered chatbots that can translate messages, summarize conversations, and provide relevant information," Naeger states. "AI can also help teams share data and insights more easily, by creating visualizations, dashboards, and reports. AI can help teams coordinate their tasks and workflows more efficiently, by automating or optimizing some of the processes." While AI-enhanced collaboration in IT sites is already happening, the emerging technology is still very much a work in progress. The move to AI-fueled collaboration means "organizations need to adapt and be prepared for shifts in how these teams work, integrating AI-driven metrics and managing AI tools," says Ammanath. 


Cybersecurity teams hesitate to use automation in TDIR workflows

When organizations were asked about the TDIR management areas where they require the most help, 36% of organizations expressed the need for third-party assistance in managing their threat detection and response, citing the challenge of handling it entirely on their own. This highlights a growing opportunity for the integration of automation and AI-driven security tools. The second most identified need, at 35%, was a desire for improved understanding of normal user, entity, and peer group behaviour within their organization, demonstrating a demand for TDIR solutions equipped with user and entity behaviour analytics (UEBA) capabilities. These solutions should ideally minimise the need for extensive customisation while offering automated timelines and threat prioritisation. “As organizations continue to improve their TDIR processes, their security program metrics will likely look worse before they get better. But the tools exist to put them back on the front foot,” continued Moore. “Because AI-driven automation can aid in improving metrics and team morale, we’re already seeing increased demand to build even more AI-powered features. ...”
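
To see what such a behavioural baseline looks like in miniature, the toy sketch below flags logins whose hour deviates sharply from a user's history. Real UEBA products model many signals jointly; the data and the z-score threshold here are assumptions:

    # A toy sketch of the UEBA idea: baseline each user's normal login
    # hour, then flag logins that deviate sharply from it.
    import statistics

    baseline_login_hours = {
        "alice": [9, 9, 10, 8, 9, 9, 10],
        "bob": [22, 23, 22, 21, 23, 22, 23],
    }

    def is_anomalous(user: str, login_hour: int, z_threshold: float = 3.0) -> bool:
        history = baseline_login_hours[user]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1.0  # guard against zero spread
        z = abs(login_hour - mean) / stdev
        return z > z_threshold

    print(is_anomalous("alice", 3))   # True: 3 a.m. is far from Alice's norm
    print(is_anomalous("bob", 23))    # False: late logins are normal for Bob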


6 best practices for third-party risk management

CISOs can’t adequately manage third-party security threats when they do not have a complete picture of the third parties within their organization, says Murray, who is also president and CAO at Murray Security Services. This may seem like an obvious point, but Murray and others say this is a particularly challenging task as an increasing amount of technology is now deployed by business units instead of a centralized IT function committed to inventorying all tech assets. So, CISOs need to implement strategies for identifying and maintaining an accurate, comprehensive, and up-to-date inventory of the third parties whose security risks must be assessed and managed, Murray says. There are certainly software solutions that help here, but Valente advises CISOs to build in other steps to help ferret out problems at third parties. For example, she says CISOs can work with the finance department to review recurring payments (including those on corporate credit cards) to identify new software subscriptions that were bought without involving the organization’s procurement department and, thus, haven’t yet been added to the inventory list.


Unstructured Data Management: Plan Your Security and Governance

Although it may sound obvious, you need a holistic understanding of all data in storage. Gaps in visibility, hidden applications, obscure data silos in branch offices -- this all contributes to higher risk if the data is not managed properly. Consider that protected data is going to end up in places where it shouldn’t, such as on forgotten or underutilized file servers and shadow IT cloud services. Employees unwittingly copy sensitive data to noncompliant locations more often than you’d think. You’ll need a way to see all your data in storage and search across it to find the files to segment for security and compliance needs. You can use the data management capabilities in your NAS/SAN/cloud storage products to search for file types such as HR and IP data, but you’ll need to integrate visibility across all storage vendors and clouds if you use more than one vendor’s solution. ... IT infrastructure teams must collaborate with security and network teams to procure, install, and manage new storage and data management technology, but a more formal process centered around the data itself is required. This may involve stakeholders from legal, compliance, risk management, finance, and IT directors in key business units.
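
A bare-bones version of that visibility step might look like the following sketch, which walks mounted shares and indexes files by extension so they can be segmented later. The paths and extensions are assumptions; a real deployment would also reach NAS, SAN, and cloud storage through vendor APIs rather than local mounts:

    # A simple sketch of the "see all your data in storage" step: walk
    # every mounted share and index files by type for later segmentation.
    from pathlib import Path
    from collections import defaultdict

    SENSITIVE_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".csv"}
    MOUNTED_SHARES = ["/mnt/hq-nas", "/mnt/branch-07", "/mnt/cloud-sync"]

    def build_inventory(shares: list[str]) -> dict[str, list[str]]:
        inventory = defaultdict(list)
        for share in shares:
            root = Path(share)
            if not root.exists():
                continue  # unreachable share: itself a visibility gap to log
            for path in root.rglob("*"):
                if path.suffix.lower() in SENSITIVE_EXTENSIONS:
                    inventory[path.suffix.lower()].append(str(path))
        return inventory

    for ext, files in build_inventory(MOUNTED_SHARES).items():
        print(f"{ext}: {len(files)} files, e.g. {files[0]}")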


Crucial Airline Flight Planning App Open to Interception Risks

Researchers from Pen Test Partners found that an App Transport Security (ATS) feature in Flysmart+ Manager that would have forced the app to use HTTPS had not been enabled. The app did not have any form of certificate validation either, leaving it exposed to interception on open and untrusted networks. "An attacker could use this weakness to intercept and decrypt potentially sensitive information in transit," PTP said in its report this week. Ken Munro, a partner at the pen testing firm, says the biggest concern had to do with the potential for attacks on the app that could cause so-called runway excursions — or veer-offs and overruns — and potential tail strikes on takeoff. "The EFB is used to calculate the required power from the engines for departure, also the required braking on landing," Munro says. "We showed that, as a result of the missing ATS setting, one could potentially tamper with the data that is then given to pilots. That data is used during these 'performance' calculations, so pilots could apply insufficient power or not enough braking action," he says. The ATS issue in Flysmart+ Manager is just one of several vulnerabilities that PTP has uncovered in EFBs in recent years.
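
The ATS mechanism is specific to iOS, but the underlying weakness, skipping certificate validation, can be shown in any HTTP client. In this hypothetical Python sketch the second call reproduces the class of flaw PTP describes; the URL is a placeholder, not the real EFB service:

    import requests

    URL = "https://example.com/"  # placeholder endpoint

    # Safe: requests verifies the server's TLS certificate by default,
    # so a man-in-the-middle on an open Wi-Fi network is rejected.
    resp = requests.get(URL, timeout=10)
    print(resp.status_code)

    # Unsafe: disabling verification reproduces the missing-validation
    # flaw. Anyone on the same untrusted network could now intercept or
    # tamper with responses, e.g. data feeding performance calculations.
    resp = requests.get(URL, timeout=10, verify=False)
    print(resp.status_code)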


Why CIOs back API governance to avoid tech sprawl

APIs are ubiquitous within modern software architectures, working behind the scenes to facilitate myriad connected capabilities. “As enablers for the integration of data and business services across platforms, APIs are very aligned with current tech trends,” says Antonio Vázquez, CIO of software company Bizagi. “Reusability, composability, accessibility, and scalability are some of the core elements that a good API strategy can provide to support tech trends like hybrid cloud, hyper-automation, or AI.” For these reasons, API-first has gathered steam, a practice that privileges the development of the developer-facing interface above other concerns. “API-first strategy becomes critical to navigate contemporary tech trends, foster innovation, and ensure adaptability in a rapidly evolving technological landscape,” says Krithika Bhat, CIO of enterprise flash storage provider Pure Storage. She considers the increasing adoption of cloud computing and microservice architectures to be top drivers of formalized API-first approaches. Digital transformation and growing reliance on third-party services are key contributors as well, she adds.



Quote for the day:

“You are never too old to set another goal or to dream a new dream.” -- C.S. Lewis

Daily Tech Digest - February 06, 2024

Championing privacy-first security: Harmonizing privacy and security compliance

When security solutions are crafted with privacy as a central consideration, organisations can deploy robust security measures while safeguarding the personal data of their customers and employees. A comprehensive cost-benefit analysis reveals significant advantages in adopting a privacy-first approach to security. For instance, proactively blocking malware before it infiltrates an organisation’s systems can avert a potential data breach. Given the average data breach cost of US$4.45 million in 2023, coupled with the consequential impact on brand reputation and legal ramifications, preventing even a single data breach becomes paramount for any company. Hence, the importance of industry-leading security measures is indisputable. Any reputable security company should provide solutions that limit its access to sensitive data and ensure the protection of the personal data entrusted to its care. ... A privacy-first security program assesses the risks associated with both implementing and not implementing security measures. If the advantages of deploying a security solution, such as email scanning, outweigh the drawbacks – which is highly probable – the organisation should proceed with the careful implementation of this capability.
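
A back-of-envelope version of that cost-benefit analysis, with every figure except the US$4.45 million breach cost assumed purely for illustration:

    # Toy expected-loss comparison. Only the $4.45M average breach cost
    # comes from the article; all other figures are assumptions.
    avg_breach_cost = 4_450_000        # average cost of a breach, 2023
    annual_breach_probability = 0.10   # assumed 10% chance per year
    security_solution_cost = 250_000   # assumed annual cost of controls
    risk_reduction = 0.60              # assumed share of breaches prevented

    expected_loss_without = annual_breach_probability * avg_breach_cost
    expected_loss_with = expected_loss_without * (1 - risk_reduction)
    net_benefit = (expected_loss_without - expected_loss_with) - security_solution_cost

    print(f"Expected annual loss without controls: ${expected_loss_without:,.0f}")
    print(f"Net annual benefit of the controls:    ${net_benefit:,.0f}")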


Far Memory Unleashed: What Is Far Memory?

Far memory is a memory tier between DRAM and Flash that has a lower cost per GB than DRAM and a higher performance than Flash. Far memory works by disaggregating memory and allowing nodes or machines to access the memory of a remote node/machine via Compute Express Link (CXL). Memory is the most contested and least elastic resource in a data center. Currently, servers can only use local memory, which may be scarce on the local system but abundant on other underutilized servers. With far memory, local machines can use a remote machine’s memory. By introducing far memory into the memory tier and moving less frequently accessed data to far memory, the system can perform efficiently with less DRAM and reduce the total cost of ownership. Far memory uses a remote machine’s memory as a swap device, either by using idle machines or by building memory appliances that only serve to provide a pool of memory shared by many servers. This approach optimizes memory usage and reduces over-provisioning. However, far memory also has its own challenges. Swapping out memory pages to remote machines increases the failure domain of each machine, which can lead to a catastrophic failure of the entire cluster.
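
The tiering behaviour is easy to picture with a toy simulation: hot pages stay in a small local tier and cold pages are demoted to a far tier, to be promoted back on access. The capacities and two-tier model below are illustrative assumptions; real far memory operates at the OS and CXL level, not in application code:

    # A conceptual simulation of far-memory tiering with LRU demotion.
    from collections import OrderedDict

    class TieredMemory:
        def __init__(self, local_capacity: int):
            self.local = OrderedDict()   # hot pages, LRU-ordered
            self.far = {}                # cold pages on a remote node
            self.local_capacity = local_capacity

        def write(self, page: int, data: bytes) -> None:
            self.local[page] = data
            self.local.move_to_end(page)
            if len(self.local) > self.local_capacity:
                cold_page, cold_data = self.local.popitem(last=False)
                self.far[cold_page] = cold_data   # demote over the fabric

        def read(self, page: int) -> bytes:
            if page in self.local:
                self.local.move_to_end(page)      # fast path: local DRAM
                return self.local[page]
            data = self.far.pop(page)             # slow path: remote fetch
            self.write(page, data)                # promote back to local
            return data

    mem = TieredMemory(local_capacity=2)
    for p in range(4):
        mem.write(p, b"page-%d" % p)
    print(sorted(mem.local), sorted(mem.far))     # [2, 3] [0, 1]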


Four reasons your agency's security infrastructure isn't agile enough

There are four key considerations for integrating security architecture effectively in an agile environment. Cross-Functional Collaboration: Security experts must actively engage with developers, testers, and product owners. Collaborating with experts helps create a shared understanding of security requirements and facilitates quick resolution of security-related issues. Embedding security professionals within Agile teams can enhance real-time collaboration and ensure consistent security controls. Security Training and Awareness: Given the rapid pace of an Agile sprint, all team members should be equipped with the knowledge to write secure code. ... Foster a Security Culture: Foster a culture where security is seen as everyone's responsibility, not just the security team's. Adapt the organizational mindset to value security equally with other business objectives. ... Security Champions within Agile Teams: Identify and nurture 'Security Champions' within each Agile team. These individuals with a keen interest in security act as a bridge between the security team and their respective agile teams. They help promote security best practices, ensuring security is not overlooked amidst other technical considerations.


AI Legislation: Enterprise Architecture Guide to Compliance

Artificial intelligence (AI) tools are so easy to leverage that they can be used by anyone within your organization without technical support. This means that you need to keep a careful eye not just on the authorized applications you leverage, but also on what AI tools your colleagues could be using without authorization. In leveraging AI tools to generate content for your organization, your employees could unwittingly input private data into the public instance of ChatGPT. Not only does this share that data with ChatGPT's vendor, OpenAI, but it actually trains ChatGPT on that content, meaning the AI tool could potentially output that information to another user outside of your organization. Alternatively, overuse of generative AI tools without proper supervision could lead to factual or textual errors being published to your customers. Gen AI tools need careful supervision to ensure they don't "hallucinate" or produce mistakes, as they are unable to self-edit. It's equally important to be able to report back to legislators on what AI is being used across your company, so they can see you're compliant. This will likely become a regulatory requirement in the near future.


Choosing a disaster recovery site

The first option is to set up your own secondary DR data center in a different location from your primary site. Many large enterprises go this route; they build out DR infrastructure that mirrors what they have in production so that, at least in theory, it can take over instantly. The appeal here lies in control. Since you own and operate the hardware, you dictate compatibility, capacity, security controls and every other aspect. You’re not relying on any third party. The downside of course, lies in cost. All of that redundant infrastructure sitting idle doesn’t come cheap. ... The second approach is to engage an external DR service provider to furnish and manage a recovery site on your behalf. Companies like SunGard built their business around this model. The appeal lies in offloading responsibility. Rather than build out your own infrastructure, you essentially reserve DR data center capacity with the provider. ... The third option for housing your DR infrastructure is leveraging the public cloud. Market leaders like AWS and Azure offer seemingly limitless capacity that can scale to meet even huge demands when disaster strikes. 


How CISOs navigate policies and access across enterprises

Simply speaking, if existing network controls are now being moved to the cloud, the scope of technical controls does not drastically differ from legacy approaches. The technology, however, has massively evolved towards platform-centric controls, and that for a good reason. Isolated controls cause complexity, and if you are moving your perimeter to a hyperscaler, both your users and their devices will no longer be managed by the corporate on-prem security controls either. A good CASB to broker between user and data is key, as is identity and access management. What’s new now is workload protection requirements à la CSAP technology. In addition to increasing sophistication and the number of security threats and successful breaches, most enterprises further increase risk by “rogue IT” teams leveraging cloud environments without the awareness and management by security teams. Cloud deployments are typically deployed faster and with less planning and oversight than data center or on-site environment deployments. Cloud security tools should be an extension of your other premise-based tools for ease of management, consistency of policy enforcement and cost savings due to additional purchase commitments, training, and certification non-duplicity.


What to Know About Machine Customers

In the realm of customer service and support, machine customers are like virtual assistants or smart devices (think of Siri or Alexa) that carry out customer service tasks on behalf of actual human customers. Alok Kulkarni, CEO and Co-founder of Cyara, says the emergence of machine customers introduces a new dynamic, requiring organizations to adapt their existing support strategies. “This might include developing specific interfaces and communication channels tailored for interactions with machine customers,” he explains in an email interview. Organizations must create additional self-service options specifically designed for machine customers. “Unlike traditional customer support approaches, catering to machine customers requires a nuanced understanding of their specific needs and operational dynamics,” Kulkarni explains. This means designing self-service interfaces that are not only user-friendly for machines but also align with the intricacies of autonomous negotiation and purchasing processes. These interfaces should empower machine customers to navigate through various stages of transactions autonomously, from product selection to payment processing, ensuring a streamlined and frictionless experience.


Google: Govs Drive Sharp Growth of Commercial Spyware Cos

Much of the concern has to do with the explosion in the availability of tools and services that allow governments and law enforcement to break into target devices with impunity, harvest information from them, and spy unchecked on victims. The vendors selling these tools — most of which are designed for mobile devices — have often openly pitched their wares as legitimate tools that aid in law enforcement and counter-terrorism efforts. But the reality is that repressive governments have routinely used spyware tools against journalists, activists, dissidents, and opposition party politicians, said Google. The company's report cites three instances of such misuse: one that targeted a human rights defender working with a Mexico-based rights organization; another against an exiled Russian journalist; and the third against the co-founder and director of a Salvadorian investigative news outlet. The researcher attributes much of the recent growth in the CSV market to strong demand from governments around the world to outsource their need for spyware tools rather than have an advanced persistent threat build them in-house. 


How To Build Autonomous Agents – The End-Goal for Generative AI

From a technology perspective, there are five elements that go into autonomous agent designs: the agent itself, for processing; tools, for interaction; prompt recipes, for prompting and planning; memory and context, for training and storing data; and APIs / user interfaces, for interaction. The agent at the center of this infrastructure leverages one or more LLMs and the integrations with other services. You can build this integration framework yourself, or you can bring in one of the existing orchestration frameworks that have been created, such as LangChain or LlamaIndex. The framework should provide the low-level foundational model APIs that your service will support. It connects your agent to the resources that you will use as part of your overall agent, including everything from existing databases and external APIs, to other elements over time. It also has to take into account what use cases you intend to deliver with your agent, from chatbots to more complex autonomous tasks. Existing orchestration frameworks can take care of a lot of the heavy lifting involved in managing LLMs, which makes it much easier and faster to build applications or services that use GenAI.
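
A stripped-down sketch of that loop, with the llm() placeholder standing in for a real foundation model call. The tools, prompt format, and CALL/ANSWER convention are invented for illustration; they are not the LangChain or LlamaIndex interfaces:

    # A minimal agent loop: ask the model to pick a tool, run it, feed
    # the observation back, and repeat until the model answers.
    def search_inventory(query: str) -> str:
        return f"3 units of '{query}' in stock"        # stub tool

    def get_price(item: str) -> str:
        return f"'{item}' costs $19.99"                # stub tool

    TOOLS = {"search_inventory": search_inventory, "get_price": get_price}

    def llm(prompt: str) -> str:
        # Placeholder: a real agent would call a foundation model here.
        if "observation" not in prompt.lower():
            return "CALL search_inventory: widget"
        return "ANSWER: 3 widgets are in stock."

    def run_agent(task: str, max_steps: int = 5) -> str:
        prompt = f"Task: {task}\nTools: {list(TOOLS)}"
        for _ in range(max_steps):
            reply = llm(prompt)
            if reply.startswith("ANSWER:"):
                return reply.removeprefix("ANSWER:").strip()
            tool_name, arg = reply.removeprefix("CALL ").split(": ", 1)
            observation = TOOLS[tool_name](arg)        # execute the tool
            prompt += f"\nObservation: {observation}"  # memory and context
        return "gave up"

    print(run_agent("How many widgets do we have?"))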


How Platform and Site Reliability Engineering Are Evolving DevOps

Actually, failure should not just be OK but welcome. Most organizations are averse to failure, but it’s only through our failures in these spaces that we can learn and grow and figure out how to best position, leverage, and continue to imagine the roles of DevOps, platform engineers, and SRE. I’ve seen this play out in large companies that went all in on DevOps and then realized that they needed a team focused on breaking down any barriers that presented themselves to developers. At scale, DevOps - even with the tools provided by the internally focused platform engineering team - didn’t really cut it. These companies then integrated the SRE function, which filled DevOps’ reliability and scalability gaps. That worked until these companies realized that they were reinventing the wheel - dozens of times. Different engineering teams within the organization were doing things just differently enough - different setups, different processes, different expectations - that they needed separate setups to put out a service. The SREs were seeing all of this after the fact, which led them to circle back to the realization that different teams needed to be using the same development building blocks. Frustrating? Yes. The cost of increasing efficiency in the future? Absolutely.



Quote for the day:

“It’s better to look ahead and prepare, than to look back and regret.” -- Jackie Joyner-Kersee

Daily Tech Digest - February 05, 2024

8 things that should be in a company BEC policy document

Smart boards and CEOs should demand that CISOs include BEC-specific procedures in their incident response (IR) plans, and companies should create policies that require security teams to update these IR plans regularly and test their efficacy. As a part of that, security and legal experts recommend that organizations plan for legal involvement across all stages of incident response. Legal especially should be involved in how incidents are communicated with internal and external stakeholders to ensure the organization doesn’t increase its legal liability if a BEC attack hits. “Any breach may carry legal liability, so it’s best to have the discussion before the breach and plan as much as possible to address issues in advance rather than to inadvertently take actions that either causes liability that might not otherwise have existed, or increases liability beyond what would have existed,” Reiko Feaver, a privacy and data security attorney and partner at Culhane Meadows, tells CSO. Feaver, who advises clients on BEC best practices, training and compliance, says BEC policy documents should stipulate that legal be part of the threat modeling team, analyzing potential impacts from different types of BEC attacks so the legal liability viewpoint can be folded into the response plan.


Many Employees Fear Being Replaced by AI — Here's How to Integrate It Into Your Business Without Scaring Them.

The first goal of integrating AI should be understanding the quickest way for it to start having a positive monetary benefit. While our AI project is still a work in progress, we are expecting to increase revenue anywhere from $2 million to $20 million as a result of a first round of investment of under $100,000. But to achieve that type of result, leaders need to get comfortable with AI and figure out the challenges and complexities they might encounter. ... If you are a glass-half-full kind of person, listening to the glass-half-empty kind of person offers a complementary point of view. Whenever I have ideas to really move the numbers, I tend to act fast. It is crucial that people understand that I am not fast-tracking AI integration because I am unhappy with our current process or people. It is because I am happy that I will not risk what we already have unless I am fully sold on the range of the upside — and I want to expedite the learning process to get to those benefits faster. I still want to talk to as many people as I can — employees, developers, marketing folks, product managers, external investors — both for the tone of responses and any major issues. Those red flags may be great things to consider, or they may show I need to give people more information. Either way, my response can alleviate their concerns.


The role of AI in modernising accounting practices

Accountants, like any other professionals, have varied views on AI—some see it as a friend, appreciating its ability to automate tasks, enhance efficiency, and reduce errors. They view AI as a valuable ally, freeing up time for strategic and analytical work. On the flip side, others perceive AI as a threat, fearing job displacement and the loss of the human touch in financial decision-making. Striking a balance between leveraging AI’s benefits for efficiency while preserving the importance of human skills is crucial for successful integration into accounting practices. ... Notably, machine learning algorithms and natural language processing are gaining prominence, enabling accountants to delve into more sophisticated tasks such as intricate data analysis, anomaly detection, and the generation of actionable insights from complex datasets. As technology continues to evolve, the trajectory of AI in accounting is expected to expand further. Future developments might include more sophisticated predictive analytics, enhanced natural language understanding for improved communication, and increased automation of compliance-related tasks. 


10 ways to improve IT performance (without killing morale)

When working to improve IT performance, leaders frequently focus on the technology instead of zeroing in on the business process. “We are usually motivated to change what’s within the scope of our control because we can move more quickly and see results sooner,” says Matthew Peters, CTO at technology services firm CAI. Yet a technology-concentrated approach can create significant risk, such as breaking processes that lie outside of IT or overspending on solutions that may only perpetuate the issue that still must be resolved. ... A great way to improve IT performance while maintaining team morale is by developing a culture of collaboration, says Simon Ryan, CTO at network management and audit software firm FirstWave. “Encourage team members to communicate openly — listen to their concerns and provide opportunities for skill development,” he explains. “This strategy is advantageous because it links individual development to overall team performance, thereby fostering a sense of purpose.” Ignoring the human factor is the most common team-related blunder, Ryan says. “An overemphasis on tasks and deadlines without regard for the team’s well-being can lead to burnout and unhappiness,” he warns. 


How Digital Natives are Reshaping Data Compliance

With their forward-thinking mindsets, today's chief compliance officers are changing the perception of emerging technologies from threats to opportunities. Rather than reacting with outright bans, they thoughtfully integrate new tools into the compliance framework. This balances innovation with appropriate risk management. It also positions compliance as an enabler of progress rather than a roadblock. The benefits of this mindset are many: A forward-thinking culture that thoughtfully integrates innovations into business processes and compliance frameworks. This allows organizations to harness the benefits of technology ethically. With an opportunistic mindset, compliance teams can explore how new tools like AI, blockchain, and automation can be used to make compliance activities more effective, efficient and data driven. When seen as working alongside business leaders to evaluate risks and implement appropriate guardrails for new tech, compliance teams’ collaborative approaches enable progress and innovation. These new technologies open up possibilities to continuously improve and modernize compliance programs. An opportunity-driven perspective seizes on tech's potential.


How to choose the right NoSQL database

Before choosing a NoSQL database, it's important to be certain that NoSQL is the best choice for your needs. Carl Olofson, research vice president at International Data Corp. (IDC), says "back office transaction processing, high-touch interactive application data management, and streaming data capture" are all good reasons for choosing NoSQL. ... NoSQL databases can break down data into segments—or shards—which can be useful for large deployments running hundreds of terabytes, Yuhanna says. “Sharding is an essential capability for NoSQL to scale databases,” Yuhanna says. “Customers often look for NoSQL solutions that can automatically expand and shrink nodes in horizontally scaled clusters, allowing applications to scale dynamically.” ... Some NoSQL databases can run on-premises, some only in the cloud, and others in a hybrid cloud environment, Yuhanna says. “Also, some NoSQL has native integration with cloud architectures, such as running on serverless and Kubernetes environments,” Yuhanna says. “We have seen serverless as an essential factor for customers, especially those who want to deliver good performance and scale for their applications, but also want to simplify infrastructure management through automation.”
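
The core sharding idea is simple enough to sketch: hash each key to pick its node. Real systems typically use consistent hashing so that adding a node relocates only a fraction of the keys; the modulo scheme and node names below are the simplest possible illustration:

    # A small sketch of hash-based sharding: a key's hash decides which
    # node stores it, so data and load spread evenly across the cluster.
    import hashlib

    NODES = ["node-a", "node-b", "node-c"]

    def shard_for(key: str) -> str:
        digest = hashlib.md5(key.encode()).hexdigest()
        return NODES[int(digest, 16) % len(NODES)]

    for user_id in ["user:1001", "user:1002", "user:1003", "user:1004"]:
        print(user_id, "->", shard_for(user_id))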


What’s Coming in Analytics (And How We’ll Get There)

The notion of composability is not just a buzzword; it's the cornerstone of modern application development. The industry is gradually moving towards a more composable enterprise, where modular, agile products integrate insights, data, and operations at their core. This transition facilitates the creation of innovative experiences tailored to user needs, significantly lowering development costs, accelerating time to market and fostering a thriving generative AI ecosystem. This more agile application development environment will also lead to a convergence of AI and BI, such that AI-powered embedded analytics may even supplant current BI tools. This will lead to a more data-driven culture where the business uses real-time analytics as an integral part of its daily work, enabling more proactive and predictive decision-making. ... As we advance into the future, the analytics industry is poised on the edge of a monumental shift. This evolution is akin to discovering a new, uncharted continent in the realm of data processing and complex analysis. This exploration into unknown territories will reveal analytics capabilities far beyond our current understanding.


Businesses banning or limiting use of GenAI over privacy risks

Organizations recognize the need to reassure their customers about how their data is being used. “94% of respondents said their customers would not buy from them if they did not adequately protect data,” explains Harvey Jang, Cisco VP and Chief Privacy Officer. “They are looking for hard evidence the organization can be trusted as 98% said that external privacy certifications are an important factor in their buying decisions. These stats are the highest we’ve seen in Cisco’s privacy research over the years, proving once more that privacy has become inextricably tied to customer trust and loyalty. This is even more true in the era of AI, where investing in privacy better positions organizations to leverage AI ethically and responsibly.” Despite the costs and requirements privacy laws may impose on organizations, 80% of respondents said privacy laws have positively impacted them, and only 6% said the impact has been negative. Strong privacy regulation boosts consumer confidence and trust in the organizations where they share their data. Further, many governments and organizations implement data localization requirements to keep specific data within a country or region.


4 ways to help your organization overcome AI inertia

The research suggests the tricky combination of a fearful workforce and the unpredictability of the current regulatory environment means many organizations are still stuck at the AI starting gate. As a result, not only are pilot projects thin on the ground, but so are the basic foundations -- in terms of both data frameworks and strategies -- upon which these initiatives are created. About two-fifths (41%) of data leaders said they have little or no data governance framework, a set of standards and guidelines that enable organizations to manage their data effectively. That is just one percentage point higher than the previous year's Maturity Index, when 40% of data leaders said the same. Just over a quarter of data leaders (27%) said their organization has no data strategy at all, which is only a slight improvement on the previous year's figure (29%). "I get why not everybody's quite there yet," says Carruthers, who, as a former CDO, understands the complexities involved in strategy and governance. ... The good news is some digital leaders are making headway. Andy Moore, CDO at Bentley Motors, is focused on building the foundations for the exploitation of emerging technologies, such as AI.


Data Lineage in Modern Data Engineering

There are generally two types of data lineage: forward lineage and backward lineage. Forward Lineage - Also known as downstream lineage, it tracks the flow of data from its source to its destination. It outlines the path that data takes through various stages of processing, transformations, and storage until it reaches its destination. It helps developers understand how data is manipulated and transformed, aiding in the design and improvement of the overall data processing workflow and quickly identifying the point of failure. By tracing the data flow forward, developers can pinpoint where transformations or errors occurred and address them efficiently. It is essential for predicting the impact of changes on downstream processes. ... Backward Lineage - Also known as upstream lineage, it traces the path of data from its destination back to its source. It provides insights into the origins of the data and the various transformations it undergoes before reaching its current state. It is crucial for ensuring data quality by allowing developers to trace any issues or discrepancies back to their source. By understanding the data's journey backward, developers can identify and rectify anomalies at their origin.
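
Both directions reduce to graph traversal over dataset dependencies, as in the sketch below. The pipeline edges are hypothetical; in practice they would be harvested from ETL jobs, SQL parsing, or orchestrator metadata:

    # A compact sketch of forward and backward lineage as graph traversal.
    EDGES = {  # source -> downstream datasets it feeds
        "raw_orders": ["clean_orders"],
        "clean_orders": ["daily_revenue", "customer_stats"],
        "customer_stats": ["churn_report"],
    }

    def forward_lineage(dataset: str) -> set[str]:
        """Everything downstream of a dataset (impact analysis)."""
        out = set()
        for child in EDGES.get(dataset, []):
            out |= {child} | forward_lineage(child)
        return out

    def backward_lineage(dataset: str) -> set[str]:
        """Everything upstream of a dataset (root-cause analysis)."""
        out = set()
        for src, children in EDGES.items():
            if dataset in children:
                out |= {src} | backward_lineage(src)
        return out

    print(forward_lineage("clean_orders"))   # daily_revenue, customer_stats, churn_report
    print(backward_lineage("churn_report"))  # customer_stats, clean_orders, raw_orders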



Quote for the day:

“Nobody talks of entrepreneurship as survival, but that’s exactly what it is.” -- Anita Roddick

Daily Tech Digest - February 04, 2024

Prepare now for when quantum computers break biometric encryption: Trust Stamp

While experts expect quantum computers will not be able to scale to defeat such systems for at least another ten years, the white paper claims, entities should address “harvest now, decrypt later” (HNDL) attacks proactively. Through an HNDL approach, an attacker could capture encrypted data pending the availability of quantum computing-enabled decryption. It is worth noting that this cyber threat would be heavily resource-intensive to perform. Such an attack would most likely only be feasible by a nation-state and would target information that would remain extremely valuable for decades in the future. Still, HNDL is an especially concerning threat for biometric PII, due to its relative permanence. Certain data encryption methods are particularly vulnerable. Asymmetric, or public-key cryptography, uses a public and private key to encrypt and decrypt information. One of the keys can be stored in the public domain, which enables connections between “strangers” to be established quickly. Because the keys are mathematically related, it is theoretically possible to calculate a private key from a public key.
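
That key relationship is easy to demonstrate at toy scale: factor the public modulus and the private key follows. The sketch below does this for a deliberately tiny RSA key; 2048-bit keys resist this only because factoring them is classically infeasible, which is precisely the assumption a large quantum computer running Shor's algorithm would break. The numbers are illustrative:

    # Toy illustration: recover an RSA private key from the public key
    # by factoring a deliberately tiny modulus.
    from math import isqrt

    def factor(n: int) -> tuple[int, int]:
        for p in range(3, isqrt(n) + 1, 2):
            if n % p == 0:
                return p, n // p
        raise ValueError("no factor found")

    n, e = 740_771, 17            # toy public key (n = p*q, exponent e)
    p, q = factor(n)              # the step a quantum computer makes easy
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)           # private exponent recovered
    print(f"p={p}, q={q}, d={d}")

    msg = 12_345
    assert pow(pow(msg, e, n), d, n) == msg   # recovered key decrypts correctly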


Managing the hidden risks of shadow APIs

In today's dynamic API landscape, maintaining comprehensive visibility into the security posture of API endpoints is paramount. All critical app and API security controls necessary to protect an app's entire ecosystem can be deployed and managed through the unified API security console of the F5 Distributed Cloud Platform. This allows DevOps and SecOps teams to observe and quickly identify suspected API abuse as anomalies are detected, as well as create policies to stop misuse. This requires the use of ML models to create baselines of normal API usage patterns. Continuous ML-based traffic monitoring allows API security to predict and block suspicious activity over time. Deviations from these baselines and other anomalies trigger alerts or automated responses to detect outliers, including rogue and shadow APIs. Dashboards play a crucial role in providing the visibility required to monitor and assess the security of APIs. The F5 Distributed Cloud WAAP platform extends beyond basic API inventory management by presenting essential security information based on actual and attack traffic.
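
One simple shadow-API discovery technique is to diff the endpoints observed in gateway logs against the documented inventory, as in the sketch below. The log lines and inventory are invented for illustration; platforms like F5's layer ML-driven behavioural baselines on top of this kind of check:

    # A simple sketch of shadow-API discovery: endpoints seen in access
    # logs that are absent from the documented inventory.
    documented_apis = {"/v1/users", "/v1/orders", "/v1/payments"}

    access_log = [
        "GET /v1/users 200",
        "POST /v1/orders 201",
        "GET /v1/admin/export 200",   # nobody documented this one
        "GET /v2/users 200",          # forgotten pilot version still live
    ]

    observed = {line.split()[1] for line in access_log}
    shadow = observed - documented_apis
    for endpoint in sorted(shadow):
        print("possible shadow API:", endpoint)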


Cybersecurity Frontline: Securing India’s digital finance infrastructure in 2024

Fintech companies are progressively allowing AI to handle routine tasks, freeing human resources for more complex challenges. AI systems are also being used to simulate cyberattacks, testing systems for vulnerabilities. This shift highlights the critical role of AI and ML in modern cybersecurity, moving beyond mere automation to proactive threat detection and system fortification. The human element, often the weakest link in cybersecurity, is receiving increased attention. Fintech firms are investing in employee training to build resilience against cyberattacks, focusing on areas such as phishing, social engineering, and password security. One of the most notable advancements in this domain is the use of AI-powered fraud detection systems. For instance, a global fintech leader has implemented a deep learning model that analyses around 75 billion annual transactions across 45 million locations to detect and prevent card-related fraud. Financial institutions keep educating customers about social engineering fraud, but the challenge arises when customers willingly provide OTPs or payment and banking credentials, resulting in account misuse.


The evolving challenge of complexity in cybersecurity

One of the biggest challenges when it comes to cybersecurity is the complexity that has evolved due to the need to use an increasing array of products and services to secure our businesses. This is largely due to the underlying complexity of our IT environments and the broad attack surface this creates. With the growing adoption of cloud and the more dispersed nature of our workforces, the perimeter approach to security that worked well in the 20th century is no longer adequate. In the same way the moats and castle walls of the Middle Ages gave good protection then but would not stand up to a modern attack, traditional firewalls and VPNs are no longer suitable now and invariably need to be augmented with lots of other layers of security tools. Modern, more flexible and (arguably) simpler zero-trust approaches such as secure access service edge, zero-trust network access and microsegmentation need to be adopted. These technologies ensure that access to applications and data, no matter where they reside, is governed by simple, identity based policies that are easy to manage while delivering levels of security and visibility that legacy approaches cannot.


CIOs rise to the ESG reporting challenge

To achieve success, CIOs must first understand how ESG reporting fits within the company’s business strategy, Sterling’s Kaur says. Then they need to engage and align with the right people in the organization. The CFO and CSO top that list, but CIOs should branch out further, as “upstream processes is where the vast majority of sustainability and ESG story really happens,” says Marsha Reppy, GRC technology leader for EY Global and EY Americas. “You will not be successful without procurement, R&D, supply chain, manufacturing, sales, human resources, legal, and tax at the table.” Because ESG data is broadly dispersed throughout the organization, CIOs will need broad consensus on an ESG reporting strategy, but the triumvirate of CIO, CFO, and CHRO should be driving ESG reporting forward, Kaur says. “Business goals matter, financials matter, and employee engagement matters,” she says. “Creating this partnership has the benefit of bringing a cohesive view forward with the right goals.” CIOs must also educate themselves on the nitty gritty of ESG reporting to fully understand the complexity and breadth of the problem they’re trying to solve, EY’s Reppy says.


How to Get Platform Engineering Just Right

In the land of digital transformation, seeing is believing, which is where observability has a role to play. Improving observability is crucial for gaining insights into the platform’s performance and behavior, which involves integrating tools like event and project monitoring, cloud cost transparency, application performance, infrastructure health and user interactions. In a rapidly growing cloud environment, observability enables teams to keep track of what is happening in terms of cost, usage, availability, performance and security across a constantly transforming cloud infrastructure. Once a project has been deployed, it needs to be managed and maintained across all cloud providers, something which is critical for keeping costs to a minimum but is often a huge and messy task. Managing this effectively requires monitoring key performance indicators (KPIs) and setting up alerts for critical events, and using logs and analysis tools to gain visibility into application behavior, track errors, and troubleshoot issues more effectively. Finally, implementing tracing systems that can track the flow of requests across various microservices and components helps to identify performance bottlenecks, understand latency issues and optimize system behavior.


AI Officer Is the Hot New Job That Pays Over $1 Million

Executives spearheading metaverse efforts at Walt Disney Co., Procter & Gamble Co. and Creative Artists Agency left. Leon's LinkedIn profile (yes, he had one) no longer exists, and there's no mention of him on the company's website, other than his introductory press release. Publicis Groupe declined to comment on the record. Instead, businesses are scrambling to appoint AI leaders, with Accenture and GE HealthCare making recent hires. A few metaverse executives have even reinvented themselves as AI experts, deftly switching from one hot technology to the next. Compensation packages average well above $1 million, according to a survey from executive-search and leadership advisory firm Heidrick & Struggles. Last week, Publicis said it would invest 300 million euros ($327 million) over the next three years on artificial intelligence technology and talent. "It's been a long time since I have had a conversation with a client about the metaverse," said Fawad Bajwa, the global AI practice leader at the Russell Reynolds Associates executive search and advisory firm. "The metaverse might still be there, but it's a lonely place."


Heart of the Matter: Demystifying Copying in the Training of LLMs

A characteristic of generative AI models is the massive consumption of data inputs, which could consist of text, images, audio files, video files, or any combination of the inputs (a case usually referred to as “multi-modal”). From a copyright perspective, an important question (of many important questions) to ask is whether training materials are retained in the large language model (LLM) produced by various LLM vendors. To help answer that question, we need to understand how the textual materials are processed. Focusing on text, what follows is a brief, non-technical description of exactly that aspect of LLM training. Humans communicate in natural language by placing words in sequences; the rules about the sequencing and specific form of a word are dictated by the specific language (e.g., English). An essential part of the architecture for all software systems that process text (and therefore for all AI systems that do so) is how to represent that text so that the functions of the system can be performed most efficiently. Therefore, a key step in the processing of a textual input in language models is the splitting of the user input into special “words” that the AI system can understand.
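
The splitting step the author describes can be illustrated with a toy greedy subword tokenizer. The tiny vocabulary below is an assumption; production LLMs learn vocabularies of tens of thousands of subwords with algorithms such as byte-pair encoding:

    # A toy greedy subword tokenizer illustrating how text is split
    # into the special "words" an AI system operates on.
    VOCAB = {"un", "break", "able", "the", "cup", "s", "a"}

    def tokenize(word: str) -> list[str]:
        tokens, i = [], 0
        while i < len(word):
            # Greedily take the longest vocabulary entry matching here.
            for j in range(len(word), i, -1):
                if word[i:j] in VOCAB:
                    tokens.append(word[i:j])
                    i = j
                    break
            else:
                tokens.append(word[i])   # unknown character: fall back
                i += 1
        return tokens

    print(tokenize("unbreakable"))   # ['un', 'break', 'able']
    print(tokenize("cups"))          # ['cup', 's']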


2024: The year quantum moves past its hype?

By contrast, today’s quantum computers are capable of just a few hundred error-free operations. This leap may sound like a return to the irrational exuberance of previous years. But there are many tangible reasons to believe. The quantum computing industry is now connecting these short-term testbeds with long-term moonshots as it starts to aim for middle-term, incremental goals. As we approach this threshold, we’ll start to more intrinsically understand errors and fix them. We can start to model simple molecules and systems, developing more powerful quantum algorithms. Then, we can work on more interesting (and impactful) applications with each new generation/testbed of quantum computer. What will those applications be? We don’t know. And that’s OK. ... But first we need to develop better quantum algorithms and QEC techniques. Then, we will need fewer qubits to run the same quantum calculations and we can unlock useful quantum computing, sooner. As progress and pace continues to accelerate, 2024 will be the year when the conversation around quantum applications has real substance as we follow tangible goals, commit to realistic ambitions and unlock real results.


Adaptive AI: The Promise and Perils for Healthcare CTOs

Adaptive AI is a subset of artificial intelligence that can learn and adjust its behavior based on new data and changing circumstances. Unlike traditional AI systems, which are static and rule-based, Adaptive AI algorithms can continually improve and adapt to evolving situations. This technology draws inspiration from the human brain's capacity for learning and adaptation. ... Adaptive AI plays a pivotal role in identifying and mitigating security threats. CTOs can leverage AI to monitor network traffic continuously, identify anomalies including software flaws and misconfigurations, and respond to threats in real time, bolstering their organization's security. It can prioritize these vulnerabilities based on the potential impact and likelihood of exploitation, allowing CTOs to allocate resources for patching and remediation efforts effectively. ... CTOs can drive innovation in customer engagement and personalization with Adaptive AI algorithms. In the case of virtual healthcare, Adaptive AI can be used to power virtual care platforms that allow patients to connect with healthcare providers from anywhere. This can improve access to care, especially for rural or underserved populations.



Quote for the day:

“Things work out best for those who make the best of how things work out.” -- John Wooden