Daily Tech Digest - February 02, 2024

CISO accountability in the era of software supply chain security

A CISO now needs to start acting like a CFO from their very first day in the role. CISOs no longer have the freedom to prioritize business interests and subordinate cybersecurity, because they can be held liable for misrepresenting security practices in the event of a cyber incident. A CFO can’t let fraud, financial crime, the absence of key stated controls, or insider dealing slide while easing into the role, and CISOs will need to treat their company’s security program the same way. While some may find this new era of CISO accountability threatening, they should also see the massive opportunity it presents. Yes, CISOs will have more work to do under this new level of scrutiny and accountability. However, this new era will allow them to take a more senior and influential role in the organization, receive greater allocations of resources to maintain an appropriate level of perceived risk, prioritize critical enterprise security needs, and be fully transparent about the security issues their company is dealing with. And because CISOs and their respective companies will be more transparent and accountable, this should lead to greater trust from customers, board members, investors, employees, regulators, and the communities in which they operate.


From Chaos to Control: Nurturing a Culture of Data Governance

Data architecture encompasses the design, structure, and organization of data assets. It involves defining the blueprint for how data is collected, stored, processed, accessed, and managed throughout its lifecycle. Data architecture sets the foundation for data governance by establishing standards, principles, and guidelines for data management. It covers aspects such as data models, data flow diagrams, database design, and the integration of data across different systems. Effective data architecture is crucial for ensuring data consistency, integrity, and accessibility, aligning data assets with the organization’s goals and objectives. Data modeling is a specific aspect of data architecture that involves creating visual representations (models) of the data and its relationships within an organization. This process helps in understanding and documenting the structure of data entities, attributes, and their interactions. Data modeling plays a vital role in data governance by providing a standardized way to communicate and document data requirements, ensuring a collective understanding among stakeholders.
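
As a concrete illustration of the modeling idea, here is a minimal sketch of a logical data model expressed as Python code; the entities, attributes, and the one-to-many relationship are invented for the example, not drawn from the article.

```python
# Two entities, their attributes, and a one-to-many relationship -- the kind
# of structural facts a data model documents so that every stakeholder reads
# the data the same way. All names here are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Customer:
    customer_id: int              # primary key
    name: str
    email: str

@dataclass
class Order:
    order_id: int                 # primary key
    customer_id: int              # foreign key -> Customer.customer_id
    order_date: date
    line_items: list[str] = field(default_factory=list)
```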


Cloud migration is still a pain

The cloud providers sold the cloud as something that needed to be leveraged ASAP, so massive workloads and data sets were lifted and shifted to this new “miracle platform.” Three things occurred: First, it was more expensive than we thought. I use the unproven figure that the cloud costs 2.5 times what enterprises believed it would cost to operate their workloads and data sets there. This all blew up in 2022, when we also saw the accumulation of workloads moved during the pandemic, many with unimproved applications and data sets. Second, poorly designed, developed, and deployed applications moved from enterprise data centers to the cloud, where those applications still need to be better designed, developed, and deployed. We’re paying more for them to run in the cloud since we’re paying for the existing inefficiencies. ... Finally, enterprises aren’t learning from their mistakes. I’ve often been taken aback by how much lousy cloud reality most enterprises accept. Although some have moved back to enterprise data centers, others are indeed funding application and data optimization. We’re still getting a C- in returning value to the business, our shared objective.


The Growing Demand for Infrastructure Resiliency—How Digital Transformation Can Help

According to Bademosi, “Integrating digital technologies is not just a trend, it is the next frontier in creating sustainable, resilient, and advanced infrastructure systems. As we look to the future, it is evident that digital technology will be at the heart of every innovation.” The benefits of harnessing new technologies and transforming infrastructure seem limitless. But government agencies and industry partners may not know where to start. According to Bademosi, it begins by gauging the current state of critical infrastructure systems and what is needed for the future. What are the strengths, weaknesses, and potential opportunities available for infrastructure? Next, it’s important to foster collaborations across government agencies, industry leaders, and the communities that will be impacted by the proposed project. Industry partners and government agencies then need to empower their workforce with the training they need to deploy these technologies on future projects. Once training is complete, they can begin to experiment with these new technologies on smaller pilot projects, using them as workshops to test strategies.


Falling into the Star Gate of Hidden Microservices Costs

We’re not going to argue that monoliths are perfect. But an intentionally designed monolith has a comprehensible solution to each flaw, and unlike in a microservices architecture, each flaw you resolve creates a feedback loop of improvement within the application’s own scope. To improve your monolith in some dimension — performance scaling, the ease of onboarding for new developers, the sheer quality of your code — you need to invest in the application itself, not abstract the problem to a third party or accept a higher cloud computing bill, hoping that scale will solve your problems. Of their experience, the Amazon Prime Video team wrote, “Moving our service to a monolith reduced our infrastructure cost by over 90%. It also increased our scaling capabilities. … The changes we’ve made allow Prime Video to monitor all streams viewed by our customers and not just the ones with the highest number of viewers. This approach results in even higher quality and an even better customer experience.” Since the Amazon Prime Video engineering team published their blog post, many have argued about whether their move is a major win for monoliths, the same old microservices architecture with new branding, or a semantic misinterpretation of what a “service” is.


The importance of IoT visibility in OT environments

The surge of sensory data volume and network traffic generated by IIoT devices can overwhelm existing network infrastructure. Outdated hardware and bandwidth constraints can severely cripple the efficient operation of these interconnected systems. Scaling up and modernizing infrastructure becomes imperative in paving the way for a flourishing IIoT ecosystem. ... In the intricate game of cyber defense, network visibility reigns supreme. It is the map and compass that guides defenders through the ever-shifting digital landscape, illuminating the hidden pathways where threats dwell. Without it, organizations navigate murky waters, blind to threat actors weaving through their systems. Network visibility emerges as the antidote, empowering defenders with a four-pronged shield: first, early threat detection, where anomalies transform into bright beacons revealing potential attacks before they escalate; second, swift incident response, allowing the affected area to be isolated and mitigated like quarantining a digital contagion; third, proactive threat hunting, where defenders actively scour network data for lurking adversaries and hidden vulnerabilities, pre-empting attacks before they materialize.


Embrace Change: Navigating Digital Transformation for Sustainable Success

Staying within the confines of one’s comfort zone for an extended period is ill-advised, especially in the face of disruptive innovations. The world has little patience for those who cling to past glories and turn a blind eye to emerging technologies. Historical examples, such as the decline of the Roman Empire, serve as stark reminders of the perils of stagnation and resistance to change. In a world that is in a constant state of flux, the choice to adapt or face extinction rests squarely on the shoulders of individuals and organizations. The significance of speed as a competitive edge cannot be overstated. Just as the velocity of an aircraft enables it to soar through the skies and the dynamic force of a fast-moving car propels it forward, adapting to the rapid pace of change is imperative for survival in the business realm. Embracing change willingly is not merely a suggestion; it is a strategic imperative. In a world characterized by constant evolution, the notion of being “too big to fail” is a myth. The decision to adapt is not dictated by external forces; it is entirely within the control of individuals and organizations.


“All About the Basics”: Cyber Hygiene in the Digital Age

In this world, the digital equivalent of leaving your front door unlocked and the windows wide open is a reality. The result? Well, it’s not pretty. First, there’s the risk of data breaches. These aren’t just inconveniences; they’re full-blown catastrophes. When we’re lax with updates and passwords, we’re essentially rolling out the red carpet for cybercriminals. They waltz in, pilfer sensitive data, and leave chaos in their wake. The fallout? Compromised personal information, financial loss, and, let’s not forget, lasting damage to our reputation. It’s the kind of nightmare that keeps grumpy CISOs up at night. Then there are the phishing attacks. Without proper awareness and training, our well-meaning but sometimes naïve users might unwittingly invite trouble right into our digital living room. It starts with an innocent click on what seems like a legitimate email. And before you know it, malware has spread through your systems like wildfire. The result? System downtime, productivity loss, and a frantic race against time to contain the breach. And let’s not even get started on unsecured devices; it’s like leaving your secret plans in a cafe, waiting for the first curious bystander to pick them up.


Navigating the New Era with Generative AI Literacy

As technology has evolved, that focus on data literacy has quickly transitioned into a focus on generative AI literacy -- a new breed of data literacy built on the core tenets of data literacy: data collection and curation, data visualization, and interpretation. With the advent of generative AI tools from industry leaders such as OpenAI, Google, Microsoft, and Anthropic, companies need their employees to know how to leverage these tools to create business value. Ultimately, data literacy and generative AI literacy have the same goals -- to drive effective business decision-making and to create organizational value. ... Generative AI’s power is due in part to its ability to accept such a wide array of inputs and prompts, but this also requires that employees learn to expand their thinking. As repetitive tasks are automated away, employees will be free to think more innovatively, which is not always intuitive for them. Educational institutions have focused for many years on teaching students to learn facts, but they are now being required to teach students how to think in terms of problem sets, alternative approaches, and innovative solution discovery.


Risk Management is Never Having to Say, ‘I Am Sorry’

Enterprise architecture is largely an exercise in risk management. Unless architecture organizations are willing to take on risk, they are unlikely to be perceived as influential partners in solving problems. Rory established his team as a solver of gnarly problems, not a complaining bystander, by accepting accountability to deliver the mobile commerce platform. One of the biggest categories of risk that architecture leaders must manage is relational risk, i.e., navigating the executive sociology. It wasn’t easy, but between Rory and Loretta the architecture department was able to achieve a key accomplishment, increasing the value of the company’s search and mobile toolkit assets, by establishing empathy with powerful business partners and creating a win/win solution to an urgent business problem. Architects can use what I call “organizational jujitsu” to gain support from agile teams by positioning high-quality architecture assets as accelerators of agility. That is, if the architecture department can make the use of existing assets and contracts the fastest route to working tested product frequently delivered to customers, it can leverage the ...



Quote for the day:

"Your greatest area of leadership often comes out of your greatest area of pain and weakness." -- Wayde Goodall

Daily Tech Digest - February 01, 2024

Making the Leap From Data Governance to AI Governance

One of the AI governance challenges Regensburger is researching revolves around ensuring the veracity of outcomes, of the content that’s generated by GenAI. “It’s sort of the unknown question right now,” he says. “There’s a liability question on how you use…AI as a decision support tool. We’re seeing it in some regulations like the AI Act and President Biden’s proposed AI Bill of Rights, where outcomes become really important, and it moves that into the governance sphere.” LLMs have the tendency to make things up out of whole cloth, which poses a risk to anyone who uses them. For instance, Regensburger recently asked an LLM to generate an abstract on a topic he researched in graduate school. “My background is in high energy physics,” he says. “The text it generated seemed perfectly reasonable, and it generated a series of citations. So I just decided to look at the citations. It’s been a while since I’ve been in graduate school. Maybe something had come up since then?” “And the citations were completely fictitious,” he continues. “Completely. They look perfectly reasonable. They had Physics Review Letters. It had all the right formats. And at your first casual inspection it looked reasonable.”


Architecting for Industrial IoT Workloads: A Blueprint

The first step in an IIoT-enabled environment is to establish communication interfaces with the machinery. In this step, there are two primary goals: read data from machines (telemetry) and write data to machines (control). Machines in a manufacturing plant can have legacy/proprietary communication interfaces as well as modern IoT sensors. Most industrial machines today are operated by programmable logic controllers (PLCs). A PLC is an industrial computer, ruggedized and adapted to control manufacturing processes — such as assembly lines, machines, and robotic devices — or any activity requiring high reliability, ease of programming, and process fault diagnosis. However, PLCs provide limited connectivity interfaces to the external world over protocols like HTTP and MQTT, restricting external data reads (for telemetry) and writes (for control and automation). Apache PLC4X bridges this gap by providing a set of API abstractions over legacy and proprietary PLC protocols. PLC4X is an open-source universal protocol adapter for IIoT appliances that enables communication over protocols including, but not limited to, Siemens S7, Modbus, Allen-Bradley, Beckhoff ADS, OPC-UA, Emerson, Profinet, BACnet, and Ethernet.
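
PLC4X itself is a Java library, but the two goals above, telemetry reads and control writes, can be pictured in a few lines of Python over plain Modbus/TCP. A minimal sketch, assuming pymodbus 3.x call signatures; the IP address, register addresses, and unit ID are placeholders.

```python
# Telemetry read and control write against a PLC over Modbus/TCP.
# (Analogous to what PLC4X abstracts across many PLC protocols in Java.)
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.0.10", port=502)  # placeholder PLC address
client.connect()

# Telemetry: read two holding registers (e.g., a sensor value).
rr = client.read_holding_registers(address=0, count=2, slave=1)
if not rr.isError():
    print("telemetry registers:", rr.registers)

# Control/automation: write a setpoint back into a holding register.
client.write_register(address=10, value=1, slave=1)

client.close()
```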


6 user experience mistakes made for security and how to fix them

The challenge here is to communicate effectively with your non-experts in a way that they understand the “what” and “why” of cybersecurity. “The goal is to make it practical rather than condescending, manipulative, or punitive,” Sunshine says. “You need to take down that fear factor.” So long as people have the assurance that they can come clean and not be fired for that kind of mistake, they can help strengthen security by coming forward about problems instead of trying to cover them up. ... To achieve optimal results, you have to strike the right balance between the level of security required and the convenience of users. Much depends on the context. The bar is much higher for those who work with government entities, for example, than a food truck business, Sunshine says. Putting all the safeguards required for the most regulated industries into effect for businesses that don’t require that level of security introduces unnecessary friction. Failing to differentiate among different users and needs is the fundamental flaw of many security protocols that require everyone to use every security measure for everything.


5 New Ways Cyberthreats Target Your Bank Account

Deepfake technology, initially designed for entertainment, has evolved into a potent tool for cybercriminals. Through artificial intelligence and machine learning, these technologies fuel intricate social engineering attacks, enabling attackers to mimic trusted individuals with astonishing precision. This proficiency grants them access to critical data like banking credentials, resulting in significant financial repercussions. ... Modern phishing tactics now harness artificial intelligence to meticulously analyse extensive data pools, encompassing social media activities and corporate communications. This in-depth analysis enables the creation of highly personalised and contextually relevant messages, mimicking trusted sources like banks or financial institutions. This heightened level of customisation significantly enhances the credibility of these communications, amplifying the risk of recipients disclosing sensitive information, engaging with malicious links, or unwittingly authorising fraudulent transactions. ... Credential stuffing is a prevalent and dangerous method cybercriminals use to breach bank accounts. This attack method exploits the widespread practice of password reuse across multiple sites and services.
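
One concrete countermeasure to that password reuse is to refuse credentials that already appear in breach corpora. A minimal sketch using the Pwned Passwords k-anonymity range API (a real, keyless endpoint; only the first five hex characters of the SHA-1 hash leave your system, and error handling and caching are omitted here).

```python
# Check whether a password appears in known breach data via k-anonymity:
# send only the first 5 hex chars of its SHA-1, match the suffix locally.
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "credential-stuffing-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# e.g., reject a new password whenever times_pwned(password) > 0
```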


Italian Businesses Hit by Weaponized USBs Spreading Cryptojacking Malware

A financially motivated threat actor known as UNC4990 is leveraging weaponized USB devices as an initial infection vector to target organizations in Italy. Google-owned Mandiant said the attacks single out multiple industries, including health, transportation, construction, and logistics. "UNC4990 operations generally involve widespread USB infection followed by the deployment of the EMPTYSPACE downloader," the company said in a Tuesday report. "During these operations, the cluster relies on third-party websites such as GitHub, Vimeo, and Ars Technica to host encoded additional stages, which it downloads and decodes via PowerShell early in the execution chain." ... Details of the campaign were previously documented by Fortgale and Yoroi in early December 2023, with the former tracking the adversary under the name Nebula Broker. The infection begins when a victim double-clicks on a malicious LNK shortcut file on a removable USB device, leading to the execution of a PowerShell script that's responsible for downloading EMPTYSPACE (aka BrokerLoader or Vetta Loader) from a remote server via another intermediate PowerShell script hosted on Vimeo.


Understanding Architectures for Multi-Region Data Residency

A critical principle in the context of multi-region deployments is establishing clarity on truth and trust. While knowing the source of truth for a piece of data is universally important, it becomes especially crucial in multi-region scenarios. Begin by identifying a fundamental unit, an "atom," within which all related data resides in one region. This could be an organizational entity like a company, a team, or an organization, depending on your business structure. Any operation that involves crossing these atomic boundaries inherently becomes a cross-region scenario. Therefore, defining this atomic unit is essential in determining the source of truth for your multi-region deployment. In terms of trust, as different regions hold distinct data, communication between them becomes necessary. This could involve scenarios like sharing authentication tokens across regions. The level of trust between regions is a decision rooted in the specific needs and context of your business. Consider the geopolitical landscape if governments are involved, especially if cells are placed in regions with potentially conflicting interests.
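
A minimal sketch of the atom idea, with an invented org-to-region lookup table: every atom has exactly one home region (the source of truth for its data), and any operation touching two different home regions is, by definition, a cross-region scenario.

```python
# Source of truth: each atomic unit (here, an organization) has one home region.
HOME_REGION = {
    "org-acme":   "eu-west-1",   # illustrative atoms and regions
    "org-globex": "us-east-1",
}

def region_for(org_id: str) -> str:
    return HOME_REGION[org_id]

def is_cross_region(*org_ids: str) -> bool:
    # Crossing atomic boundaries == touching more than one home region.
    return len({region_for(o) for o in org_ids}) > 1

assert not is_cross_region("org-acme", "org-acme")
assert is_cross_region("org-acme", "org-globex")
```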


Developing a Data Literacy Program for Your Organization

Before developing a data literacy program for an organization, it is crucial to conduct a comprehensive training needs assessment. This assessment helps in understanding the current level of data literacy within the organization and identifying areas that require improvement. It involves gathering information about employees’ existing knowledge, skills, and attitudes toward data analysis and interpretation. To conduct the needs assessment, different methods can be employed. Surveys, interviews, focus groups, or even analyzing existing data can provide valuable insights into employees’ proficiency levels and their specific learning needs. By involving various stakeholders, such as managers, department heads, and employees themselves, in this process, a holistic understanding of the organization’s requirements can be achieved. ... It is also beneficial to compare the program’s outcomes against predefined benchmarks or industry standards. This allows organizations to benchmark their progress against other similar initiatives and identify areas where further improvements are necessary. Overall, continuously evaluating the effectiveness of a data literacy program helps organizations understand its impact on individuals’ capabilities and organizational performance.


Women In Architecture: Early Insights and Reflections

The question of why there are so few women in architecture is a key one in our minds. Rather than dwelling on the negative, the conversations focus on identifying the root causes to help us move into action effectively. I have learned that the answer to this question is incredibly nuanced and layered, with many interrelated factors. Some root causes for fewer women in architecture draw from the macro-level context, including a similar set of challenges experienced by women in technology. However, one of the biggest contributors is the architecture profession itself and how it is presented. This is a hard truth that has asserted itself as a common thread throughout the conversations. For example, the lack of clarity regarding the role and value proposition of architecture, often perceived as abstract, technical, and unattainable, poses a substantial barrier. ... However, there is a powerful correspondence between the momentum for more diversity in architecture and exactly what the profession needs most now. For architects of the future to thrive, it is not enough to excel at cognitive, architectural, and technical competencies; it is just as important to master human competencies such as communication, influence, leadership, and emotional intelligence.


New York Times Versus Microsoft: The Legal Status of Your AI Training Set

One of the problems the tech industry has had from the start is product contamination through the use of a competitor’s intellectual property. The tech industry is not alone, and the problem of one company illicitly acquiring the intellectual property of another and then getting caught goes back decades. If an engineer uses generative AI whose training set is contaminated by a competitor’s intellectual property, there is a decent chance, should that competitor find out, that the resulting product will be found infringing and be blocked from sale -- with the company that had made use of that AI potentially facing severe fines and sanctions, depending on the court’s ruling. ... Ensuring any AI solution from any vendor contains indemnification for the use of its training set, or is constrained to only use data sets that have been vetted as fully under your or your vendor’s legal control, should be a primary requirement for use. (Be aware that if you provide AI capabilities to others, an increasing number of customers will demand indemnification.) You’ll need to ensure that the indemnification is adequate to your needs and that the data sets won’t compromise your products or services under development or in market, so your revenue stream isn’t put at risk.


How to calculate TCO for enterprise software

It’s obvious that hardware, once it has reached end-of-life, needs to be disposed of properly. With software, there are costs as well, primarily associated with data export. First, data needs to be migrated from the old software to the new, which can be complex given all the dependencies and database calls that might be required for even a single business process. Then there are backups and disaster recovery. The new software might require that data be formatted in a different way. And you still might need to keep archived copies of certain data stores from the old system for regulatory or compliance reasons. Another wrinkle in the TCO calculation is estimating how long you plan to use the software. Are you an organization that doesn’t change tech stacks if it doesn’t have to and therefore will probably run the software for as long as it still does the job? In that case, it might make sense to do a five-year TCO analysis as well as a 10-year version. On the other hand, what if your company has an aggressive sustainability strategy that calls for eliminating all of its data centers within three years and moving as many apps as possible to SaaS alternatives?
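
A minimal sketch of such a multi-year comparison, with invented placeholder figures; note the one-off entry costs (migration) and exit costs (data export, archiving) that the paragraph calls out.

```python
# Simplified TCO model: one-off migration-in and exit costs plus annual
# license and operations costs. All figures are placeholders.
def tco(license_per_year: int, ops_per_year: int,
        migration_in: int, exit_out: int, years: int) -> int:
    return migration_in + years * (license_per_year + ops_per_year) + exit_out

for years in (5, 10):
    total = tco(license_per_year=120_000, ops_per_year=45_000,
                migration_in=200_000, exit_out=80_000, years=years)
    # the one-off costs weigh less per year the longer you keep the software
    print(f"{years}-year TCO: ${total:,} (${total // years:,}/year)")
```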



Quote for the day:

"One advantage of talking to yourself is that you know at least somebody's listening." -- Franklin P. Jones

Daily Tech Digest - January 31, 2024

Rethinking Testing in Production

With products becoming more interconnected, trying to accurately replicate third-party APIs and integrations outside of production is close to impossible. Trunk-based development, with its focus on continuous integration and delivery, acknowledges the need for a paradigm shift. Feature flags emerge as the proverbial Archimedes lever in this transformation, offering a flexible and controlled approach to testing in production. Developers can now gradually roll out features without disrupting the entire user base, mitigating the risks associated with traditional testing methodologies. Feature flags empower developers to enable a feature in production for themselves during the development phase, allowing them to refine and perfect it before exposing it to broader testing audiences. This progressive approach ensures that potential issues are identified and addressed early in the development process. As the feature matures, it can be selectively enabled for testing teams, engineering groups, or specific user segments, facilitating thorough validation at each step. The logistical nightmare of maintaining identical environments is alleviated, as testing in production becomes an integral part of the development workflow.
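
A minimal sketch of that progressive gating, with invented flag names and a stable hash-based percentage bucket; production systems typically delegate this to a feature-flag service rather than an in-process table.

```python
# A feature can be on for its developer, then a test group, then a percentage
# of all users -- without a redeploy. Names and thresholds are illustrative.
import hashlib

FLAGS = {
    "new-checkout": {"allow_users": {"dev-jane"}, "allow_groups": {"qa"}, "percent": 5},
}

def bucket(user_id: str, flag: str) -> int:
    # Stable 0-99 bucket so a user keeps the same experience across requests.
    return int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100

def is_enabled(flag: str, user_id: str, groups: set[str]) -> bool:
    cfg = FLAGS.get(flag)
    if cfg is None:
        return False
    return (user_id in cfg["allow_users"]
            or bool(groups & cfg["allow_groups"])
            or bucket(user_id, flag) < cfg["percent"])
```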


Enterprise Architecture in the Financial Realm

Enterprise architecture emerges as the North Star guiding banks through these changes. Its role transcends being a mere operational construct; it becomes a strategic enabler that harmonizes business and technology components. A well-crafted enterprise architecture lays the foundation for adaptability and resilience in the face of digital transformation. Enterprise architecture manifests two key characteristics: unity and agility. The unity aspect inherently provides an enterprise-level perspective, where business and IT methodologies seamlessly intertwine, creating a cohesive flow of processes and data. Conversely, agility in enterprise architecture construction involves deconstruction and subsequent reconstruction, refining shared and reusable business components, akin to assembling Lego bricks. ... Quantifying the success of digital adaptation is crucial. Metrics should not solely focus on financial outcomes but also on key performance indicators reflecting the effectiveness of digital initiatives, customer satisfaction, and the agility of operational models.


Cloud Security: Stay One Step Ahead of the Attackers

The relatively easy availability of cloud-based storage can lead to a data sprawl that is uncontrolled and unmanageable. In many cases, data that must be deleted or secured is left ungoverned, as organizations are not aware of its existence. In April 2022, cloud data security firm Cyera found unmanaged data store copies, snapshots, and log data. The firm’s researchers found that 60% of the data security issues present in cloud data stores were due to unsecured sensitive data. The researchers further observed that over 30% of scanned cloud data stores were ghost data, and more than 58% of these ghost data stores contained sensitive or very sensitive data. ... Despite best practices advised by cloud service providers, data breaches that originate in the cloud have only increased. IBM’s annual Cost of a Data Breach report, for example, highlights that 45% of studied breaches have occurred in the cloud. Also noteworthy: the 43% of reporting organizations that said they were only in the early stages of implementing security practices to protect their cloud environments, or had not started at all, observed higher breach costs.


Five Questions That Determine Where AI Fits In Your Digital Transformation Strategy

Once you understand the why and the what, only then can you consider how your organization can use insights from AI to better accomplish its goals. How will your people respond, and how will they benefit? Today’s organizations have multiple technology partners, and they may have many that are all saying they can do AI. But how will your organization work with all those partners to make an AI solution come together? Many organizations are developing AI policies to define how it can be used. Having these guardrails ensures that your organization is operating ethically, morally and legally when it comes to the use of AI. ... It’s important to consider whether your organization is truly ready for AI at an enterprise or divisional level before deciding to implement AI at scale. Pilot projects can help you determine whether the implementation is generating the intended results and better understand how end users will interact with the processes. If you can't achieve customization and personalization across the organization, AI initiatives will be much tougher to implement.


A Dive into the Detail of the Financial Data Transparency Act’s Data Standards Requirements

The act is a major undertaking for regulators and regulated firms. It is also an opportunity for the LEI, if selected, to move to another level in the US, which has been slow to adopt the identifier, and to significantly increase the numbers that will strengthen the Global LEI System. While industry experts suggest regulators in scope of the FDTA, collectively called Financial Stability Oversight Council (FSOC) agencies, initially considered data standards including the LEI and the Financial Instrument Global Identifier published by Bloomberg, they suggest the LEI is the best match for the regulation’s requirement for “covered agencies to establish ‘common identifiers’ for information reported to covered regulatory agencies, which could include transactions and financial products/instruments.” ... The selection and implementation of a reporting taxonomy is more challenging, as it will require many of the regulators to abandon existing reporting practices often based on PDF, text, and CSV files, and replace these with electronic reporting and machine-readable tagging. XBRL fits the bill, say industry experts, although there has been pushback from some agencies that see the unfunded requirement for change as too great a burden.
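
If the LEI is selected, one helpful property is that validating it is mechanical: ISO 17442 LEIs are 20 alphanumeric characters whose last two digits are ISO 7064 MOD 97-10 check digits, the same scheme IBANs use. A minimal sketch:

```python
# Validate an LEI: map letters A..Z to 10..35, and the whole 20-character
# string read as a number must be congruent to 1 modulo 97.
def lei_is_valid(lei: str) -> bool:
    if len(lei) != 20 or not lei.isalnum():
        return False
    digits = "".join(str(int(c, 36)) for c in lei.upper())  # '0'-'9','A'-'Z'
    return int(digits) % 97 == 1
```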


Data Center Approach to Legacy Modernization: When is the Right Time?

Legacy systems can lead to inefficiencies in your business. If we take one of the parameters mentioned above, such as cooling, one example of inefficiency could be an old server that’s no longer in use but still turned on. This could be placing unnecessary strain on your cooling, thus impacting your environmental footprint. Legacy systems may no longer be the most appropriate for your business, as newer technologies emerge that offer a more efficient method of producing the same, or better, results. If you neglect this technology, you might be giving your competitors an advantage, which could be costly for your business. ... A cyber-attack takes place every 39 seconds, according to one report. This puts businesses at risk of losing or compromising not only their intellectual property and assets but also their customers’ data. This could put you at risk of damaging your reputation and even facing regulatory fines. One of the best reasons to invest in digital transformation is for the security of your business. Systems that no longer receive updates can become a target of cyber-attacks and act as a vulnerability within your technology infrastructure.


4 paths to sustainable AI

Hosting AI operations at a data center that uses renewable power is a straightforward path to reduce carbon emissions, but it’s not without tradeoffs. Online translation service Deepl runs its AI functions from four co-location facilities: two in Iceland, one in Sweden, and one in Finland. The Icelandic data center uses 100% renewably generated geothermal and hydroelectric power. The cold climate also eliminates 40% or more of the total data center power needed to cool the servers because they open the windows rather than use air conditioners, says Deepl’s director of engineering Guido Simon. Cost is another major benefit, he says, with prices of five cents per kilowatt-hour compared to about 30 cents or more in Germany. The network latency between the user and a sustainable data center can be an issue for time-sensitive applications, says Stent, but only in the inference stage, where the application provides answers to the user, rather than the preliminary training phase. Deepl, with headquarters in Cologne, Germany, found it could run both training and inference from its remote co-location facilities. “We’re looking at roughly 20 milliseconds more latency compared to a data center closer to us,” says Simon.


Can ChatGPT drive my car? The case for LLMs in autonomy

Autonomous driving is an especially challenging problem because certain edge cases require complex, human-like reasoning that goes far beyond legacy algorithms and models. LLMs have shown promise in going beyond pure correlations to demonstrating a real “understanding of the world.” This new level of understanding extends to the driving task, enabling planners to navigate complex scenarios with safe and natural maneuvers without requiring explicit training. ... Safety-critical driving decisions must be made in less than one second. The latest LLMs running in data centers can take 10 seconds or more. One solution to this problem is hybrid-cloud architectures that supplement in-car compute with data center processing. Another is purpose-built LLMs that compress large models into form factors small enough and fast enough to fit in the car. Already we are seeing dramatic improvements in optimizing large models. Mistral 7B and Llama 2 7B have demonstrated performance rivaling GPT-3.5 with an order of magnitude fewer parameters (7 billion vs. 175 billion). Moore’s Law and continued optimizations should rapidly shift more of these models to the edge.


The Race to AI Implementation: 2024 and Beyond

The biggest problem is that the competitive and product landscape will be undergoing massive flux, so picking a strategic solution will be increasingly difficult. Younger companies that are less likely to be able to handle the speed of these advancements should focus on openness so that if they fail, someone else can pick up support, interoperability, and compatibility. If you aren’t locked into a single vendor’s solution and can mix and match as needed, you can move on or off a platform based on your needs. Like any new technology, take advice about hardware selection from the platform supplier. This means that if you are using ChatGPT, you want to ask OpenAI for advice about new hardware. If you are working with Microsoft or Google or any other AI developer, ask them what hardware they would recommend. ... You need a vendor that embraces all the client platforms for hybrid AI and one with a diverse, targeted solution set that individually focuses on the markets your firm is in. Right now, only Lenovo seems to have all the parts necessary thanks to its acquisition of Motorola.



Quote for the day:

"It's fine to celebrate success but it is more important to heed the lessons of failure." -- Bill Gates

Daily Tech Digest - January 30, 2024

Most cloud-based genAI performance stinks

Generative AI systems often comprise various components. They include data ingestion services, storage, computing, and networking. Architecting these components to work synergistically often leads to overcomplexity, where performance issues, driven by the poorest-performing components, are difficult to isolate. I’ve seen poorly performing networks and saturated databases. Those things are not directly related to generative AI, but they can cause performance problems nonetheless. ... Protecting AI models and their data against unauthorized access and breaches goes without saying, especially in cloud environments where multitenancy is common. Too many performance issues raise security risks. In many instances, security mechanisms, such as encryption, introduce performance issues that, if not resolved, will worsen as the data grows. Architecture and testing are your friends here. Take some time to understand how security affects generative AI performance. ... Related to security is adherence to data governance and compliance standards. They can impose additional layers of performance management complexity. Much like security, we need to figure out how to work with these requirements.


Using AI and responsibility for data privacy

If the AI is a self-hosted solution without a connection to application programming interfaces (API) or other data flow to the developer/provider or other third parties, the user is likely to remain solely responsible under data protection law. The fact that the AI provider initially programmed and provided the AI system and determined the technical functionality and the algorithms used by the AI can hardly be sufficient for the AI provider to be held (co-)responsible. It is correct that with the programming the AI provider already specifies the data processing (the means) initiated later by the user, which the user adopts in the context of the subsequent concrete data processing. However, this is the case with all software and therefore cannot be deemed decisive for the role of the controller. If, on the other hand, the AI is a Software-as-a-Service (SaaS) or AI-as-a-Service and the AI provider is still involved in the data processing initiated by the user, the AI provider is at least one potential additional operator in the circle of possible controllers. However, this does not automatically make the AI provider the controller of the data processing carried out by the AI within the meaning of the GDPR.


Transformative technology trends coming to the fore in 2024

Human machine interface (HMI) will transform the way people behave in various scenarios – this includes how drivers interact with their cars, engineers work with heavy machinery, laboratory technicians operate in hazardous environments, and much more. Advanced HMI won’t all be about gesture recognition, however. Expect to see greater adoption of natural voice interfaces around the world, as AI enables more native language interactions with virtual assistants and chatbots. This should finally break the barrier that kept millions (if not billions) of potential customers away from technologies such as home assistants, which only operate in a few selected languages. ... We also need to keep a watchful eye on Gen AI for more nefarious reasons too. There is a high probability that malicious actors will also co-opt this technology to create computer viruses – leading to a surge in malware. AI is not the only cyber security concern, however. We are also seeing major developments in quantum computing. This could enable hackers to break encryptions that would currently take years to break, within minutes. 


Business privacy obligations hard to understand

Jo Stewart-Rattray, Oceania Ambassador at ISACA, said the results are worrying and are cause for major concern globally, particularly around budget deficits, low confidence, and lack of compliance clarity. “Every organisation in ANZ and across the world, from SMEs through to enterprise, has a responsibility to protect the privacy of its customer and stakeholder data, and many governments, including Australia’s Federal government, are updating legislation to ensure best practice,” said Ms Stewart-Rattray. “It is paramount that organisations understand what is expected of them in order to devise an effective privacy policy and implement it accordingly. Only then will they be able to realise the benefits of embedding privacy practices in digital transformation from the outset, including customer loyalty and reputational and financial performance.” ... “When privacy teams face limited budgets and skills gaps among their workforce, it can be even more difficult to stay on top of ever-evolving and expanding data privacy regulations, and the risk of data breaches can even increase,” says Safia Kazi, ISACA principal, privacy professional practices.


US-based cloud companies may need to reveal client details

The proposed change can restrict the pace of innovation in the Chinese AI ecosystem as the Chinese AI developers may be subjected to greater scrutiny by the US Government. “On the other hand, for local alternatives like Baidu ERNIE, Alibaba Tongyi Qianwen, Tencent Hunyuan, Huawei Pangu, Zhipu GLM, and Baichuan, this becomes important leverage for them to focus on their innovation despite the performance gap. It will also force Chinese vendors and enterprises to further prioritize localization, accelerating the evolution of AI software and hardware ecosystem in the long run,” said Charlie Dai, Vice President and Principal Analyst at Forrester. The restrictions may have implications for the global AI ecosystem as well. “In general, this will cast a shadow over the global AI ecosystem. Firstly, foreign companies, particularly those from China, may face greater scrutiny and oversight from the US government. This increased attention could lead to delays, additional costs, and potential restrictions on the development and deployment of AI applications,” Dai said. In addition, the requirement to disclose sensitive information about technology, data usage, and business operations can raise significant concerns about IP protection. 


Great security or great UX? Both, please

A security step-up should be used only for higher-risk scenarios, such as anomalous behavior or sensitive actions like purchasing a product, changing passwords or account information, or inputting financial details into a form. The average user of a B2B SaaS app should go months without running into a security step-up. Recognize when they make sense and get rid of those that don’t. Fewer steps are more secure because users will not become numb to them. In contrast, sparsely used step-ups will be perceived as an indication of a riskier environment or action that requires more care. Be smart, as well, about when you have strong enough information not to warrant a step-up. For example, if a user logs in with strong 2FA like a security token and immediately goes into a sensitive process, a step-up may not be warranted because the session is short and the authentication is recent. How you do a step-up, as well, is crucial. First, tell the user why you are asking for additional information. Second, make it easy for them to follow the process by explaining precisely what will happen in the step-up and providing visual cues like breadcrumbs.
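
A minimal sketch of that decision logic, with an invented action list and freshness window: step up for anomalies and sensitive actions, but skip it when a strong authentication is recent.

```python
# Risk-based step-up: most requests sail through; only sensitive actions
# without a recent strong authentication trigger extra friction.
import time

SENSITIVE_ACTIONS = {"purchase", "change_password", "update_account_info"}
STRONG_AUTH_MAX_AGE = 5 * 60   # seconds; illustrative threshold

def needs_step_up(action: str, last_strong_auth_ts: float | None,
                  anomalous: bool = False) -> bool:
    if anomalous:
        return True                 # anomalous behavior always steps up
    if action not in SENSITIVE_ACTIONS:
        return False                # the average user should rarely see one
    if last_strong_auth_ts is None:
        return True
    # Recent strong 2FA plus a short session: no step-up warranted.
    return time.time() - last_strong_auth_ts > STRONG_AUTH_MAX_AGE
```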


Mastering the data science gamble: Strategies for success in a volatile landscape

One of the most significant pitfalls companies face is the blind adoption of data science merely because it is the industry buzzword. Visionary implementation should not be about following trends; it should be about understanding the unique needs of the business and aligning data science initiatives with strategic goals. Companies need a clear roadmap, a vision that transcends the charm of technology trends and fads. Without a precise vision, data science initiatives are equivalent to a ship without a destination, drifting aimlessly amidst the digital sea. A well-defined strategy, coupled with risk mitigation techniques, ensures that data science efforts are not futile ventures but powerful tools driving tangible outcomes. Moreover, the landscape of data science is ever-changing. Adopting an agile approach, where hypotheses are tested rapidly, allows for quick iterations and adjustments. Being nimble in experimentation provides the flexibility to adapt models in response to evolving market demands. Rapid prototyping and experimentation allow businesses to fail fast, learn, and refine their approaches swiftly.


Ransomware’s Impact Could Include Heart Attacks, Strokes & PTSD

The psychological harm of ransomware attacks on staff is intense and often overlooked. The considerable stress on the individuals involved in responding to ransomware attacks can lead companies to hire a post-traumatic stress disorder support team. Senior employees suffer stress from financial concerns, while middle management suffers stress caused by extremely long workdays, including particularly stressful communications with the threat actor. IT teams are the main victims, as they endure extreme workday conditions and feel directly responsible for protecting the organization’s systems. ... Victims of ransomware attacks rarely share their experiences. In the best case, companies share an incident response report publicly to help other organizations improve their defenses, but also often to show their customers that they have handled the threat responsively; yet many organizations stay silent for various reasons: reputational concerns, fear, or legal reasons. ... As stated by RUSI in the report, “there is a real human impact to ransomware attacks that is yet to be fully grasped and measured.”


Distributed Applications Need a Consistent Security Posture

With applications and APIs being made available across clouds and on-premises data centers, a comprehensive approach to security must include an authentication platform that is flexible and extensible and that functions with the various clients required to use it. The zero trust security model framework requires per-application authentication instead of a single network-level authentication that gives access to all. It doesn’t matter if you choose a third-party identity provider or go the service provider route, but it’s important to provide a consistent authentication experience. Application end users get confused when they encounter different login experiences across different applications, and this allows attackers to attempt to capture credentials from unsuspecting employees and customers. Many developers build the authentication layer into their applications and APIs, which leads to security posture inconsistencies due to varying skill levels among developers, lack of standardization and haphazard policy enforcement, and also increases development time and costs significantly. 
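
A minimal sketch of the alternative: every service verifies the same identity provider’s tokens through one shared helper instead of hand-rolling its own login layer. This uses PyJWT; the issuer, audience, and shared secret are invented placeholders (a real deployment would verify against the identity provider’s published keys).

```python
import jwt  # pip install PyJWT

SECRET = "change-me"                   # placeholder; use the IdP's keys/JWKS
ISSUER = "https://idp.example.com"     # placeholder identity provider
AUDIENCE = "internal-apps"

def authenticate(token: str) -> dict:
    """Return the verified claims, or raise jwt.InvalidTokenError."""
    return jwt.decode(token, SECRET, algorithms=["HS256"],
                      issuer=ISSUER, audience=AUDIENCE)

# Every application calls the same helper, so users get one consistent
# authentication experience and policy is enforced in one place.
token = jwt.encode({"sub": "user-1", "iss": ISSUER, "aud": AUDIENCE},
                   SECRET, algorithm="HS256")
print(authenticate(token)["sub"])  # -> user-1
```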


We Have Only Begun To Scratch The Surface Of AI’s True Innovative Power

The potential applications that will arise through AI “are vast and can potentially transform various industries—from healthcare and education to finance and retail,” says Huang. AI-driven applications such as ChatGPT or AI Copilot “are redefining user access to information, enabling a more efficient and intuitive experience in place of traditional methods such as web searching. The advance of multi-modal AI models has created a new paradigm of opportunities for businesses around context generation and retrieval.” It means there will be new and far more intuitive ways of dealing with computers. With recent AI progress in large language models, “in addition to voice and video generation, we will likely see new businesses, focused on providing more natural and human-centric interactions between humans and machines, sprouting up,” Huang predicts. Business leaders across the spectrum recognize that we are only starting to understand what AI — fused with other concepts — can deliver. “AI can act as a powerful tool for serendipity, connecting disparate information and fostering unexpected discoveries,” says Bownes.



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Laundry

Daily Tech Digest - January 29, 2024

Seven critical components of new performance management

With many aspects of performance, upfront clarity is needed about the target, standard, and minimum acceptable levels. General criteria such as “5 SMART objectives” risk constraining top performers or providing insufficient clarity to poor performers or those in developmental stages. General organisation-wide processes should be seen by managers as minimum requirements, not best practice. Expectations should be calibrated for fairness at this stage—like setting a handicap before the metaphorical contest begins, not after the contest has ended. Monitoring and measuring is about ensuring that both the manager and the employee are engaged in monitoring and measuring all key aspects of performance (WHAT, HOW, and GROWTH). Only then will each individual receive sufficient, timely, and useful feedback to support improvement. This element also ensures that future assessment can be evidence-based. Enabling and enhancing is the key to performance management and is oftentimes given insufficient attention. We know that every interaction between a manager and a member of staff can have a significant impact on that individual’s motivation and performance.


How Are Regulators Reacting to the Speed of AI Development?

“The speed of AI development is incredibly exciting, as the finance industry stands to benefit in several ways. But we’d be naive to think such rapid technological change cannot outstrip the speed at which regulations are created and implemented. “Ensuring AI is adequately regulated remains a huge challenge. Regulators can start by developing comprehensive guidelines on AI safety to guide researchers, developers and companies. This will also help establish grounds for partnerships between academia, industry and government to foster collaboration in AI development, which brings us closer to the safe deployment and use of AI. “We can’t forget that AI is a new phenomenon in the mainstream, so we must see more initiatives to educate the public about AI and its implications, promoting transparency and understanding. It’s vital that regulators make such commitments but also pledge to fund research into AI safety and best practices. To see AI’s rapid acceleration as advantageous, and not risk reversing the fantastic progress already made, proper funding for research is non-negotiable.”


Russia hacks Microsoft: It’s worse than you think

This time around, though, Midnight Blizzard didn’t have to build a sophisticated hacking tool. To attack Microsoft, it used one of the most basic of basic hacking tricks, “password spraying.” In it, hackers type commonly used passwords into countless random accounts, hoping one will give them access. Once they get that access, they’re free to roam throughout a network, hack into other accounts, steal email and documents, and more. In a blog post, Microsoft said Midnight Blizzard broke into an old test account using password spraying and then used the account’s permissions to get into “Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions,” and steal emails and documents attached to them. The company claims the hackers initially targeted information about Midnight Blizzard itself, and that “to date, there is no evidence that the threat actor had any access to customer environments, production systems, source code, or AI systems.” As if to reassure customers, the company noted, “The attack was not the result of a vulnerability in Microsoft products or services.”
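
Defenders can hunt for exactly the pattern described: one source failing logins across many distinct accounts, with only a few tries per account. A minimal detection sketch with invented thresholds and event shape.

```python
# Password-spray signature: many distinct usernames per source, few failures
# per username (unlike brute force, which hammers one account).
from collections import defaultdict

def spray_suspects(failed_logins, min_accounts=30, max_tries_per_account=3):
    """failed_logins: iterable of (source_ip, username) tuples."""
    per_ip = defaultdict(lambda: defaultdict(int))   # ip -> username -> fails
    for ip, user in failed_logins:
        per_ip[ip][user] += 1
    return [ip for ip, users in per_ip.items()
            if len(users) >= min_accounts
            and max(users.values()) <= max_tries_per_account]
```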


Prioritizing Data: Why a Solid Data Management Strategy Will Be Critical in 2024

Good decisions rely on shared data, especially the right data at the right time. Sometimes, the challenge is that the data itself often raises more questions than it answers. This trend will continue to worsen before it improves, as disjointed data ecosystems with disparate tools, platforms, and disconnected data silos become increasingly challenging for enterprises. This is why the concept of a data fabric has emerged as a method to better manage and share data. The holistic goal of a data fabric is to bring together the data management tools that carry data from identification, access, cleaning, and enrichment through transformation, governance, and analysis. That is a tall order and will take several years to mature before adoption happens across enterprises. Current solutions were not fully developed to deliver all the promises of a data fabric. In the coming year, organizations will incorporate knowledge graphs and artificial intelligence for metadata management to improve today’s offerings, and these will be a key criterion for making them more effective. Semantic metadata will enable decentralized data management, following the data mesh paradigm.


Transforming IT culture for business success

The “Creatorverse” work environment fosters creativity and collaboration through its blend of virtual work and state-of-the art physical workspaces, Wenhold says. “All of this keeps our culture alive and keeps Business Technology a destination department,” he adds. An obsessive focus on simplicity anchors the belief and value system underpinning IT culture at the Pacific Northwest National Laboratory (PNNL), according to Brian Abrahamson, associate lab director and chief digital officer for computing and IT. For years, the lab struggled under the weight of decentralized IT and government standards and regulations, which complicated procedures and spurred too many overly complex systems that didn’t talk to one another. Under Abrahamson’s direction, the IT organization spent the past decade embracing human-centered design principles, delivering mobile accessibility, and creating personalized and effortless consumer-grade experiences designed to create connections among scientists and give them ready access to a workbench primed for scientific discovery.


The top four governance, risk & compliance trends to watch in 2024

Financial institutions handle sensitive consumer data every day, which is a responsibility integral to maintaining the trust consumers place in banks, credit unions, and similar entities. Safeguarding this data is not only a critical duty but also subject to rigorous regulation. The gravity of this responsibility is underscored by the potential ramifications of cyber incidents, which not only jeopardise consumer information but also strain a financial institution’s technological infrastructure. The fallout may include financial losses, reputational damage, and legal consequences. While many organisations have existing cybersecurity plans and incident response programs, the focus in 2024 is expected to shift towards rigorous testing. The dynamic nature of cybersecurity threats necessitates a proactive approach to ensure these plans and programs remain effective in the face of evolving challenges. Financial institutions may increasingly turn to external consultants for assistance in developing cybersecurity incident response policies or reviewing existing plans to ensure alignment with regulatory requirements.


5 ways tech leaders can increase their business acumen

There’s an opportunity to help business stakeholders advance their technical acumen and use the dialog to develop a shared understanding of problems, opportunities, and solution tradeoffs. Humberto Moreira, principal solutions engineer at Gigster, says, “The opportunity to interact directly with technologists can also give business stakeholders a useful peek behind the curtain at how tools they use every day are developed, so this meeting of the minds can be mutually beneficial to these two groups that don’t always communicate as well as they should.” ... Engineers must recognize the scale and complexity of automation before jumping into solutions. Following one user’s journey is insufficient requirements gathering when re-engineering a complex workflow involving many people and multiple departments using a mix of technologies and manual steps. Technology teams should follow six-sigma methodologies for these challenges by documenting process flows, measuring productivity, and capturing quality defect metrics as key steps to developing business acumen before diving into automation opportunities.


AI in 2024: Should We Still Be “Moving Fast and Breaking Things”?

It was clear from the moment it arrived on the scene that generative AI’s proficiency with natural language was a gamechanger, opening up this technology to legal professionals in a way that simply wasn’t possible in the past. Additionally, as time goes on, generative AI is able to work with larger and larger blocks of text. The days when generative AI models could only handle 1,000 words are in the rearview mirror; they can now handle 200,000 words. ... The best bet here is to look for vendors with an in-depth understanding of daily legal workflows combined with an understanding of which areas would actually benefit from AI as a way to streamline, accelerate, or otherwise enhance those workflows. After all, some workflows just need some Excel rules or some other “low tech” solution – while others scream out for the efficiency that AI can bring. Established vendors with domain expertise will understand these nuances. ... An old adage in Silicon Valley famously advises companies to “move fast and break things.” There was a little bit of that mindset over the past year, as firms jumped into generative AI because it was the technology of the moment, and no one wanted to seem like they were behind the curve for such a groundbreaking new technology.


eDiscovery and Cybersecurity: Protecting Sensitive Data Throughout Legal Proceedings

In today’s digital world, hackers are a constant threat to the security of sensitive data found in legal proceedings. Even law firm computer systems can be vulnerable to attack. Hackers who harbor malicious intent could then turn around and take advantage of the stolen data, using it to steal others’ identities, commit financial fraud, or worse. ... Law firms and attorneys are responsible for keeping client data safe and meeting privacy regulations. Not doing so results in liability lawsuits, charges of professional malpractice, and even the loss of customer confidence. The implications of data breaches in law don’t stop there, however. Lawsuits brought by affected individuals or regulatory bodies are a potential legal consequence of data breaches. These lawsuits can bring huge penalties for damages; they have sunk even long-established firms. Legal professionals involved in a data breach may also face professional sanctions, potentially including suspension or revocation of their licenses. Ethically, the mishandling of sensitive data goes against the principles of client confidentiality and trust.


Prioritizing cybercrime intelligence for effective decision-making in cybersecurity

Given the vast amount of cybercrime intelligence data generated daily, it is crucial for security teams to effectively prioritize the information they use for decision-making. To do this, I recommend security teams conduct regular risk assessments that consider the organization’s risk profile, taking into account historical data and similar companies in their industry. Once the risk profile is created, security teams can leverage the most suitable threat intelligence feeds and sources. Evaluation of these risks should not be static but rather a continuous process that allows teams to regularly review and update their priorities based on the evolving threat landscape. ... To balance gathering cybercrime intelligence with respecting privacy and adhering to legal considerations, organizations need to follow strict legal compliance, including data protection laws. Organizations should also minimise the collection of sensitive information, focus only on essential data, and establish clear ethical guidelines for their intelligence-gathering activities.



Quote for the day:

"Leaders draw out one's individual greatness." -- John Paul Warren