Daily Tech Digest - February 05, 2024

8 things that should be in a company BEC policy document

Smart boards and CEOs should demand that CISOs include BEC-specific procedures in their incident response (IR) plans, and companies should create policies that require security teams to update these IR plans regularly and test their efficacy. As a part of that, security and legal experts recommend that organizations plan for legal involvement across all stages of incident response. Legal especially should be involved in how incidents are communicated to internal and external stakeholders to ensure the organization doesn’t increase its legal liability if a BEC attack hits. “Any breach may carry legal liability, so it’s best to have the discussion before the breach and plan as much as possible to address issues in advance rather than to inadvertently take actions that either cause liability that might not otherwise have existed, or increase liability beyond what would have existed,” Reiko Feaver, a privacy and data security attorney and partner at Culhane Meadows, tells CSO. Feaver, who advises clients on BEC best practices, training and compliance, says BEC policy documents should stipulate that legal be part of the threat modeling team, analyzing potential impacts from different types of BEC attacks so the legal liability viewpoint can be folded into the response plan.


Many Employees Fear Being Replaced by AI — Here's How to Integrate It Into Your Business Without Scaring Them.

The first goal of integrating AI should be understanding the quickest way for it to start delivering a positive monetary benefit. While our AI project is still a work in progress, we are expecting to increase revenue anywhere from $2 million to $20 million as a result of a first round of investment of under $100,000. But to achieve that type of result, leaders need to get comfortable with AI and figure out the challenges and complexities they might encounter. ... If you are a glass-half-full kind of person, listening to the glass-half-empty kind of person offers a complementary point of view. Whenever I have ideas to really move the numbers, I tend to act fast. It is crucial that people understand that I am not fast-tracking AI integration because I am unhappy with our current process or people. It is because I am happy that I will not risk what we already have unless I am fully sold on the range of the upside — and I want to expedite the learning process to get to those benefits faster. I still want to talk to as many people as I can — employees, developers, marketing folks, product managers, external investors — both to gauge the tone of responses and to surface any major issues. Those red flags may be great things to consider, or they may mean I need to give people more information. Either way, my response can alleviate their concerns.


The role of AI in modernising accounting practices

Accountants, like any other professionals, have varied views on AI—some see it as a friend, appreciating its ability to automate tasks, enhance efficiency, and reduce errors. They view AI as a valuable ally, freeing up time for strategic and analytical work. On the flip side, others perceive AI as a threat, fearing job displacement and the loss of the human touch in financial decision-making. Striking a balance between leveraging AI’s benefits for efficiency while preserving the importance of human skills is crucial for successful integration into accounting practices. ... Notably, machine learning algorithms and natural language processing are gaining prominence, enabling accountants to delve into more sophisticated tasks such as intricate data analysis, anomaly detection, and the generation of actionable insights from complex datasets. As technology continues to evolve, the trajectory of AI in accounting is expected to expand further. Future developments might include more sophisticated predictive analytics, enhanced natural language understanding for improved communication, and increased automation of compliance-related tasks. 


10 ways to improve IT performance (without killing morale)

When working to improve IT performance, leaders frequently focus on the technology instead of zeroing in on the business process. “We are usually motivated to change what’s within the scope of our control because we can move more quickly and see results sooner,” says Matthew Peters, CTO at technology services firm CAI. Yet a technology-concentrated approach can create significant risk, such as breaking processes that lie outside of IT or overspending on solutions that may only perpetuate the issue that still must be resolved. ... A great way to improve IT performance while maintaining team morale is by developing a culture of collaboration, says Simon Ryan, CTO at network management and audit software firm FirstWave. “Encourage team members to communicate openly — listen to their concerns and provide opportunities for skill development,” he explains. “This strategy is advantageous because it links individual development to overall team performance, thereby fostering a sense of purpose.” Ignoring the human factor is the most common team-related blunder, Ryan says. “An overemphasis on tasks and deadlines without regard for the team’s well-being can lead to burnout and unhappiness,” he warns. 


How Digital Natives are Reshaping Data Compliance

With their forward-thinking mindsets, today's chief compliance officers are changing the perception of emerging technologies from threats to opportunities. Rather than reacting with outright bans, they thoughtfully integrate new tools into the compliance framework. This balances innovation with appropriate risk management. It also positions compliance as an enabler of progress rather than a roadblock. The benefits of this mindset are many: A forward-thinking culture that thoughtfully integrates innovations into business processes and compliance frameworks. This allows organizations to harness the benefits of technology ethically. With an opportunistic mindset, compliance teams can explore how new tools like AI, blockchain, and automation can be used to make compliance activities more effective, efficient and data driven. When seen as working alongside business leaders to evaluate risks and implement appropriate guardrails for new tech, compliance teams’ collaborative approaches enable progress and innovation. These new technologies open up possibilities to continuously improve and modernize compliance programs. An opportunity-driven perspective seizes on tech's potential.


How to choose the right NoSQL database

Before choosing a NoSQL database, it's important to be certain that NoSQL is the best choice for your needs. Carl Olofson, research vice president at International Data Corp. (IDC), says "back office transaction processing, high-touch interactive application data management, and streaming data capture" are all good reasons for choosing NoSQL. ... NoSQL databases can break down data into segments—or shards—which can be useful for large deployments running hundreds of terabytes, Yuhanna says. “Sharding is an essential capability for NoSQL to scale databases,” Yuhanna says. “Customers often look for NoSQL solutions that can automatically expand and shrink nodes in horizontally scaled clusters, allowing applications to scale dynamically.” ... Some NoSQL databases can run on-premises, some only in the cloud, and others in a hybrid cloud environment, Yuhanna says. “Also, some NoSQL databases have native integration with cloud architectures, such as running on serverless and Kubernetes environments,” Yuhanna says. “We have seen serverless as an essential factor for customers, especially those who want to deliver good performance and scale for their applications, but also want to simplify infrastructure management through automation.”
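
To make the sharding idea concrete, here is a minimal sketch of the hash-based routing many NoSQL stores perform under the hood. The shard count and key names are illustrative; production systems typically use consistent hashing so that adding or removing nodes does not remap most keys.

```python
import hashlib

NUM_SHARDS = 8  # illustrative cluster size

def shard_for_key(key: str) -> int:
    """Map a record key to a shard via a stable hash, a simplified
    version of the hash-based sharding many NoSQL stores perform."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# Each key lands deterministically on one shard, so reads and writes
# for that key always route to the same node.
for user_id in ("user-1001", "user-1002", "user-1003"):
    print(user_id, "-> shard", shard_for_key(user_id))
```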


What’s Coming in Analytics (And How We’ll Get There)

The notion of composability is not just a buzzword; it's the cornerstone of modern application development. The industry is gradually moving towards a more composable enterprise, where modular, agile products integrate insights, data, and operations at their core. This transition facilitates the creation of innovative experiences tailored to user needs, significantly lowering development costs, accelerating time to market and fostering a thriving generative AI ecosystem. This more agile application development environment will also lead to a convergence of AI and BI, such that AI-powered embedded analytics may even supplant current BI tools. This will lead to a more data-driven culture where the business uses real-time analytics as an integral part of its daily work, enabling more proactive and predictive decision-making. ... As we advance into the future, the analytics industry is poised on the edge of a monumental shift. This evolution is akin to discovering a new, uncharted continent in the realm of data processing and complex analysis. This exploration into unknown territories will reveal analytics capabilities far beyond our current understanding.


Businesses banning or limiting use of GenAI over privacy risks

Organizations recognize the need to reassure their customers about how their data is being used. “94% of respondents said their customers would not buy from them if they did not adequately protect data,” explains Harvey Jang, Cisco VP and Chief Privacy Officer. “They are looking for hard evidence the organization can be trusted as 98% said that external privacy certifications are an important factor in their buying decisions. These stats are the highest we’ve seen in Cisco’s privacy research over the years, proving once more that privacy has become inextricably tied to customer trust and loyalty. This is even more true in the era of AI, where investing in privacy better positions organizations to leverage AI ethically and responsibly.” Despite the costs and requirements privacy laws may impose on organizations, 80% of respondents said privacy laws have positively impacted them, and only 6% said the impact has been negative. Strong privacy regulation boosts consumer confidence and trust in the organizations where they share their data. Further, many governments and organizations implement data localization requirements to keep specific data within a country or region.


4 ways to help your organization overcome AI inertia

The research suggests the tricky combination of a fearful workforce and the unpredictability of the current regulatory environment means many organizations are still stuck at the AI starting gate. As a result, not only are pilot projects thin on the ground, but so are the basic foundations -- in terms of both data frameworks and strategies -- upon which these initiatives are created. About two-fifths (41%) of data leaders said they have little or no data governance framework -- a set of standards and guidelines that enables an organization to manage its data effectively -- just a percentage point higher than the 40% who said the same in the previous year's Maturity Index. Just over a quarter of data leaders (27%) said their organization has no data strategy at all, which is only a slight improvement on the previous year's figure (29%). "I get why not everybody's quite there yet," says Carruthers, who, as a former CDO, understands the complexities involved in strategy and governance. ... The good news is some digital leaders are making headway. Andy Moore, CDO at Bentley Motors, is focused on building the foundations for the exploitation of emerging technologies, such as AI.


Data Lineage in Modern Data Engineering

There are generally two types of data lineage: forward lineage and backward lineage. Forward lineage - Also known as downstream lineage, it tracks the flow of data from its source to its destination. It outlines the path that data takes through various stages of processing, transformations, and storage until it reaches its destination. It helps developers understand how data is manipulated and transformed, aiding in the design and improvement of the overall data processing workflow and quickly identifying the point of failure. By tracing the data flow forward, developers can pinpoint where transformations or errors occurred and address them efficiently. It is essential for predicting the impact of changes on downstream processes. ... Backward lineage - Also known as upstream lineage, it traces the path of data from its destination back to its source. It provides insights into the origins of the data and the various transformations it undergoes before reaching its current state. It is crucial for ensuring data quality by allowing developers to trace any issues or discrepancies back to their source. By understanding the data's journey backward, developers can identify and rectify anomalies at their origin.
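
Both directions are easy to picture as walks over the same dependency graph, one along the edges and one against them. A minimal sketch with invented dataset names:

```python
from collections import defaultdict, deque

# Each edge points from a source dataset to a dataset derived from it.
edges = [
    ("orders_raw", "orders_clean"),
    ("orders_clean", "daily_revenue"),
    ("fx_rates", "daily_revenue"),
]

downstream = defaultdict(set)  # forward lineage edges
upstream = defaultdict(set)    # backward lineage edges
for src, dst in edges:
    downstream[src].add(dst)
    upstream[dst].add(src)

def trace(start, graph):
    """Breadth-first walk of the lineage graph from `start`."""
    seen, queue = set(), deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(trace("orders_raw", downstream))   # forward: {'orders_clean', 'daily_revenue'}
print(trace("daily_revenue", upstream))  # backward: {'orders_clean', 'orders_raw', 'fx_rates'}
```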



Quote for the day:

“Nobody talks of entrepreneurship as survival, but that’s exactly what it is.” -- Anita Roddick

Daily Tech Digest - February 04, 2024

Prepare now for when quantum computers break biometric encryption: Trust Stamp

While experts expect quantum computers will not be able to scale to defeat such systems for at least another ten years, the white paper claims, entities should address “harvest now, decrypt later” (HNDL) attacks proactively. Through an HNDL approach, an attacker could capture encrypted data pending the availability of quantum computing-enabled decryption. It is worth noting that this cyber threat would be heavily resource-intensive to perform. Such an attack would most likely only be feasible by a nation-state and would target information that would remain extremely valuable for decades in the future. Still, HNDL is an especially concerning threat for biometric PII, due to its relative permanence. Certain data encryption methods are particularly vulnerable. Asymmetric, or public-key cryptography, uses a public and private key to encrypt and decrypt information. One of the keys can be stored in the public domain, which enables connections between “strangers” to be established quickly. Because the keys are mathematically related, it is theoretically possible to calculate a private key from a public key.
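
A toy example makes that mathematical relationship tangible. The modulus below is deliberately tiny, so the private key can be recovered by brute-force factoring in microseconds; real keys use 2048-bit or larger moduli that resist classical factoring, and it is exactly that guarantee which Shor's algorithm on a large quantum computer would break.

```python
from math import isqrt

# Textbook RSA with a deliberately tiny modulus (n = 61 * 53).
n, e = 3233, 17                    # the public key

# "Break" the key: factor n, then derive the private exponent d.
p = next(p for p in range(3, isqrt(n) + 1, 2) if n % p == 0)
q = n // p
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)            # encrypt with the public key
print(pow(cipher, d, n))           # decrypt with the recovered private key -> 42
```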


Managing the hidden risks of shadow APIs

In today's dynamic API landscape, maintaining comprehensive visibility into the security posture of API endpoints is paramount. All critical app and API security controls necessary to protect an app's entire ecosystem can be deployed and managed through the unified API security console of the F5 Distributed Cloud Platform. This allows DevOps and SecOps teams to observe and quickly identify suspected API abuse as anomalies are detected, as well as to create policies to stop misuse. This requires the use of ML models to create baselines of normal API usage patterns. Continuous ML-based traffic monitoring allows API security to predict and block suspicious activity over time. Deviations from these baselines and other anomalies trigger alerts or automated responses to detect outliers, including rogue and shadow APIs. Dashboards play a crucial role in providing the visibility required to monitor and assess the security of APIs. The F5 Distributed Cloud WAAP platform extends beyond basic API inventory management by presenting essential security information based on actual and attack traffic.
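
F5 has not published its models, but the baselining idea itself can be sketched in a few lines: learn the normal request rate for an endpoint, then flag large deviations. The traffic figures and threshold below are invented for illustration; real systems model many more signals than request volume.

```python
import statistics

# Requests per minute for one endpoint during known-normal traffic.
baseline_rpm = [118, 122, 130, 125, 119, 127, 124, 121, 128, 123]
mean = statistics.mean(baseline_rpm)
stdev = statistics.stdev(baseline_rpm)

def is_anomalous(observed_rpm: float, threshold: float = 3.0) -> bool:
    """Flag traffic more than `threshold` standard deviations from
    the learned baseline, a crude stand-in for ML-based monitoring."""
    return abs(observed_rpm - mean) / stdev > threshold

print(is_anomalous(126))  # False: within normal variation
print(is_anomalous(480))  # True: possible abuse or an unknown/shadow API surge
```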


Cybersecurity Frontline: Securing India’s digital finance infrastructure in 2024

Fintech companies are progressively allowing AI to handle routine tasks, freeing human resources for more complex challenges. AI systems are also being used to simulate cyberattacks, testing systems for vulnerabilities. This shift highlights the critical role of AI and ML in modern cybersecurity, moving beyond mere automation to proactive threat detection and system fortification. The human element, often the weakest link in cybersecurity, is receiving increased attention. Fintech firms are investing in employee training to build resilience against cyberattacks, focusing on areas such as phishing, social engineering, and password security. One of the most notable advancements in this domain is the use of AI-powered fraud detection systems. For instance, a global fintech leader has implemented a deep learning model that analyses around 75 billion annual transactions across 45 million locations to detect and prevent card-related fraud. And although financial institutions keep educating customers about social engineering fraud, the challenge remains that customers willingly hand over OTPs and payment/banking credentials, which results in account misuse.


The evolving challenge of complexity in cybersecurity

One of the biggest challenges when it comes to cybersecurity is the complexity that has evolved due to the need to use an increasing array of products and services to secure our businesses. This is largely due to the underlying complexity of our IT environments and the broad attack surface this creates. With the growing adoption of cloud and the more dispersed nature of our workforces, the perimeter approach to security that worked well in the 20th century is no longer adequate. In the same way the moats and castle walls of the Middle Ages gave good protection then but would not stand up to a modern attack, traditional firewalls and VPNs are no longer suitable now and invariably need to be augmented with lots of other layers of security tools. Modern, more flexible and (arguably) simpler zero-trust approaches such as secure access service edge, zero-trust network access and microsegmentation need to be adopted. These technologies ensure that access to applications and data, no matter where they reside, is governed by simple, identity-based policies that are easy to manage while delivering levels of security and visibility that legacy approaches cannot.


CIOs rise to the ESG reporting challenge

To achieve success, CIOs must first understand how ESG reporting fits within the company’s business strategy, Sterling’s Kaur says. Then they need to engage and align with the right people in the organization. The CFO and CSO top that list, but CIOs should branch out further, as “upstream processes are where the vast majority of the sustainability and ESG story really happens,” says Marsha Reppy, GRC technology leader for EY Global and EY Americas. “You will not be successful without procurement, R&D, supply chain, manufacturing, sales, human resources, legal, and tax at the table.” Because ESG data is broadly dispersed throughout the organization, CIOs will need broad consensus on an ESG reporting strategy, but the triumvirate of CIO, CFO, and CHRO should be driving ESG reporting forward, Kaur says. “Business goals matter, financials matter, and employee engagement matters,” she says. “Creating this partnership has the benefit of bringing a cohesive view forward with the right goals.” CIOs must also educate themselves on the nitty gritty of ESG reporting to fully understand the complexity and breadth of the problem they’re trying to solve, EY’s Reppy says.


How to Get Platform Engineering Just Right

In the land of digital transformation, seeing is believing, which is where observability has a role to play. Improving observability is crucial for gaining insights into the platform’s performance and behavior, which involves integrating tools like event and project monitoring, cloud cost transparency, application performance, infrastructure health and user interactions. In a rapidly growing cloud environment, observability enables teams to keep track of what is happening in terms of cost, usage, availability, performance and security across a constantly transforming cloud infrastructure. Once a project has been deployed, it needs to be managed and maintained across all cloud providers, something which is critical for keeping costs to a minimum but is often a huge and messy task. Managing this effectively requires monitoring key performance indicators (KPIs) and setting up alerts for critical events, and using logs and analysis tools to gain visibility into application behavior, track errors, and troubleshoot issues more effectively. Finally, implementing tracing systems that can track the flow of requests across various microservices and components helps to identify performance bottlenecks, understand latency issues and optimize system behavior.
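
As one concrete way to get that kind of request-level visibility (the article does not prescribe a tool), here is a minimal tracing sketch using the OpenTelemetry Python SDK; the service name and span attributes are invented.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Export spans to the console; a production setup would send them
# to a collector via an OTLP exporter instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("handle_order") as span:
    span.set_attribute("cloud.region", "eu-west-1")  # illustrative attribute
    with tracer.start_as_current_span("query_inventory"):
        pass  # the nested span records this step's latency
```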


AI Officer Is the Hot New Job That Pays Over $1 Million

Executives spearheading metaverse efforts at Walt Disney Co., Procter & Gamble Co. and Creative Artists Agency left. Leon's LinkedIn profile (yes, he had one) no longer exists, and there's no mention of him on the company's website, other than his introductory press release. Publicis Groupe declined to comment on the record. Instead, businesses are scrambling to appoint AI leaders, with Accenture and GE HealthCare making recent hires. A few metaverse executives have even reinvented themselves as AI experts, deftly switching from one hot technology to the next. Compensation packages average well above $1 million, according to a survey from executive-search and leadership advisory firm Heidrick & Struggles. Last week, Publicis said it would invest 300 million euros ($327 million) over the next three years on artificial intelligence technology and talent. "It's been a long time since I have had a conversation with a client about the metaverse," said Fawad Bajwa, the global AI practice leader at the Russell Reynolds Associates executive search and advisory firm. "The metaverse might still be there, but it's a lonely place."


Heart of the Matter: Demystifying Copying in the Training of LLMs

A characteristic of generative AI models is the massive consumption of data inputs, which could consist of text, images, audio files, video files, or any combination of the inputs (a case usually referred to as “multi-modal”). From a copyright perspective, an important question (of many important questions) to ask is whether training materials are retained in the large language model (LLM) produced by various LLM vendors. To help answer that question, we need to understand how the textual materials are processed. Focusing on text, what follows is a brief, non-technical description of exactly that aspect of LLM training. Humans communicate in natural language by placing words in sequences; the rules about the sequencing and specific form of a word are dictated by the specific language (e.g., English). An essential part of the architecture for all software systems that process text (and therefore for all AI systems that do so) is how to represent that text so that the functions of the system can be performed most efficiently. Therefore, a key step in the processing of a textual input in language models is the splitting of the user input into special “words” that the AI system can understand.
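
A quick way to see this splitting in action is to run a real tokenizer. The sketch below uses tiktoken, OpenAI's open-source BPE tokenizer, as one concrete example; other vendors' tokenizers split text differently, and the resulting IDs are specific to each vocabulary.

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Humans communicate in natural language.")

print(tokens)                             # integer token IDs
print([enc.decode([t]) for t in tokens])  # the sub-word pieces behind each ID
```

Common words usually map to a single token, while rarer words are split into several sub-word pieces, which is why token boundaries rarely line up exactly with dictionary words.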


2024: The year quantum moves past its hype?

By contrast, today’s quantum computers are capable of just a few hundred error-free operations. This leap may sound like a return to the irrational exuberance of previous years. But there are many tangible reasons to believe. The quantum computing industry is now connecting these short-term testbeds with long-term moonshots as it starts to aim for middle-term, incremental goals. As we approach this threshold, we’ll start to more intrinsically understand errors and fix them. We can start to model simple molecules and systems, developing more powerful quantum algorithms. Then, we can work on more interesting (and impactful) applications with each new generation/testbed of quantum computer. What will those applications be? We don’t know. And that’s OK. ... But first we need to develop better quantum algorithms and QEC techniques. Then, we will need fewer qubits to run the same quantum calculations and we can unlock useful quantum computing, sooner. As progress and pace continue to accelerate, 2024 will be the year when the conversation around quantum applications has real substance as we follow tangible goals, commit to realistic ambitions and unlock real results.


Adaptive AI: The Promise and Perils for Healthcare CTOs

Adaptive AI is a subset of artificial intelligence that can learn and adjust its behavior based on new data and changing circumstances. Unlike traditional AI systems, which are static and rule-based, Adaptive AI algorithms can continually improve and adapt to evolving situations. This technology draws inspiration from the human brain's capacity for learning and adaptation. ... Adaptive AI plays a pivotal role in identifying and mitigating security threats. CTOs can leverage AI to monitor network traffic continuously, identify anomalies including software flaws and misconfigurations, and respond to threats in real time, bolstering their organization's security. It can prioritize these vulnerabilities based on the potential impact and likelihood of exploitation, allowing CTOs to allocate resources for patching and remediation efforts effectively. ... CTOs can drive innovation in customer engagement and personalization with Adaptive AI algorithms. In the case of virtual healthcare, Adaptive AI can be used to power virtual care platforms that allow patients to connect with healthcare providers from anywhere. This can improve access to care, especially for rural or underserved populations.



Quote for the day:

“Things work out best for those who make the best of how things work out.” -- John Wooden

Daily Tech Digest - February 03, 2024

NCA’s Plaggemier on Finding a Path to Data Privacy Compliance

On the international stage, companies are becoming more aware of the more active and robust policies they may face and the penalties they can carry. That has led to some patterns, Plaggemier says, developing around what is reasonable for companies to enact in relation to their sector and industry. “Do you have security or privacy tools or practices in place that are in line with your competitors?” she asks. While such an approach might be considered reasonable at first, competitors might be way ahead with much more mature programs, Plaggemier says, possibly making copying rivals no longer a reasonable approach and compelling companies to find other ways to achieve compliance. Data privacy regulations continue to gain momentum, and she believes it will be interesting to see what further kind of enforcement actions develop and how the courts in California, for example, manage. As CCPA and other state-level regulations continue into their sophomore eras, Plaggemier says at least a few more states seem likely to get on the bandwagon of data privacy regulation. Meanwhile, there is also some growing concern about how AI may play a role in potential abuses of data in the future.


What Is Enterprise Architecture? (And Why Should You Care About It)

Ideally, Enterprise Architecture supplies the context and insight to guide Solution Architecture. To address broad considerations and align diverse stakeholder viewpoints, Enterprise Architecture often needs to be broader, less specific, and less technical than Solution Architecture. ... Done well, Enterprise Architecture should provide long-term guidance on how different technology components support overall business objectives. It should not prescribe how technology is, or should be, implemented, but rather provide guardrails that help inform design decisions and prioritization. Additionally, most organizations have several technology components that support business operations; Salesforce is usually just one. Understanding how the various technology components work together will enable you to be a well-informed contributing member of a larger team. EA can help to provide valuable context about how Salesforce interacts with other systems and might spark ideas on how Salesforce specifically can be better utilized to support an organization.


AnyDesk says hackers breached its production servers, reset passwords

In a statement shared with BleepingComputer late Friday afternoon, AnyDesk says they first learned of the attack after detecting indications of an incident on their product servers. After conducting a security audit, they determined their systems were compromised and activated a response plan with the help of cybersecurity firm CrowdStrike. AnyDesk did not share details on whether data was stolen during the attack. However, BleepingComputer has learned that the threat actors stole source code and code signing certificates. The company also confirmed that the attack did not involve ransomware but didn't share too much information about the attack other than saying their servers were breached, with the advisory mainly focusing on how they responded to the attack. As part of their response, AnyDesk says they have revoked security-related certificates and remediated or replaced systems as necessary. They also reassured customers that AnyDesk was safe to use and that there was no evidence of end-user devices being affected by the incident. "We can confirm that the situation is under control and it is safe to use AnyDesk."


The Ultimate 7-Step CEO Guide to Visionary Leadership

Unlike strategic objectives, which are rationally derived, visions are values-laden. They give meaning through an ideological goal. Since they are about what should be, they are, by definition, an expression of values and corporate identity. Thus, effective CEOs keep the vision malleable in relation to the business landscape but never change the values underneath. Not only that, but their personal values align with the organization and its vision — one reason for doing a values assessment in CEO succession. ... Some of the most catastrophic events in history have been the result of a psychopath's vision. Visions can be powerful, influential and morally corrupt — all at the same time. Conversely, real leaders create a vision that benefits the entire ecosystem, where the rising tide lifts all boats and makes the world a better place. Robert House, from the University of Pennsylvania, defined a greater good vision as "an unconscious motive to use social influence, or to satisfy the power need, in socially desirable ways, for the betterment of the collective rather than for personal self-interest." This is using the will to power for the betterment of humanity, to shape the future, rather than as a source of ruthless evildoing.


AI Revolutionizes Voice Interaction: The Dawn Of A New Era In Technology

So what can we do to make sure we’re ready for this universal shift to voice-controlled tech and having natural language conversations with machines? Dengel suggests the answer lies in meeting the challenge head-on. This means drawing together teams made of technologists, engineers, designers, communications experts and business leaders. Their core focus is to identify opportunities and potential risks to the business, allowing them to be managed proactively rather than reactively. “That’s always the first step,” he says, “because you start defining what’s possible, but you’re doing it in the context of what’s realistic as well because you’ve got your tech folks involved as well … ” It’s a “workshop” approach pioneered by Apple and adopted by various tech giants that have found themselves at the forefront of an emerging wave of transformation. But it’s equally applicable to just about any forward-looking business or organization that doesn’t want to be caught off-guard. Dengel says that addressing a group of interns recently, he told them, “I wish I were in your shoes – the next five years is gonna be more innovation than there’s been in the last five or maybe the last 20 years.”


Level up: Gamify Your Software Security

Gamification has been a great way to increase skills across the industry, and this has become particularly important as adversaries become more sophisticated and robust security becomes a critical piece to business continuity. ... We all love our extrinsic motivators, whether it’s stars or our green squares of activity on GitHub or even our badges and stickers in forums and groups. So why not create a reward system for security too? This makes it possible for developers to earn points, badges or status for successfully integrating security measures into their code, recognizing their achievements. ... Just as support engineers are often rewarded for the speed and volume of tickets they close, similar ideas can be used to advance security practices and hygiene in your organization. Use leaderboards to encourage a healthy competitive spirit and recognize individuals or teams for exceptional security contributions. ... This is in addition to the badges and other rewards mentioned above. I’ve seen recognition programs for other strategic initiatives in organizations, such as “Top Blogger” or “Top Speaker” and even special hoodies or swag awarded to those who achieve the title, giving it exclusivity and prestige.


802.11x: Wi-Fi standards and speeds explained

The big news in wireless is the expected ratification of Wi-Fi 7 (802.11be) by the IEEE standards body early this year. Some vendors are already shipping pre-standard Wi-Fi 7 gear, and the Wi-Fi Alliance announced in January that it has begun certifying Wi-Fi 7 products. While the adoption of Wi-Fi 7 is expected to have the most impact on the wireless market, the IEEE has been busy working on other wireless standards as well. In 2023 alone, the group published 802.11bb, a standard for communication via light waves; 802.11az, which significantly improves location accuracy; and 802.11bd for vehicle-to-vehicle wireless communication. Looking ahead, IEEE working groups are tackling new technology areas, such as enhanced data privacy (802.11bi), WLAN sensing (802.11bf), and randomized and changing MAC addresses (802.11bh). In addition, the IEEE has established special-interest groups to investigate the use of ambient energy harvested from the environment, such as heat, to power IoT devices. There’s a study group looking at standards for high-throughput, low-latency applications such as augmented reality/virtual reality. Another group is developing new algorithms to support AI/ML applications.


What is AI networking? Use cases, benefits and challenges

AI networking can optimize IT service management (ITSM) by handling the most basic level 1 and level 2 support issues (like password resets or hardware glitches). Leveraging NLP, chatbots and virtual agents can field the most common and simple service desk inquiries and help users troubleshoot. AI can also identify higher-level issues that go beyond step-by-step instructions and pass them along for human support. AI networking can also help reduce trouble ticket false-positives by approving or rejecting tickets before they are acted on by the IT help desk. This can reduce the probability that human workers will chase tickets that either weren’t real problems in the first place, were mistakenly submitted or duplicated or were already resolved. ... AI can analyze large amounts of network data and traffic and perform predictive network maintenance. Algorithms can identify patterns, anomalies and trends to anticipate potential issues before they degrade performance or cause unexpected network outages. IT teams can then act on these to prevent — or at least minimize — disruption. AI networking systems can also identify bottlenecks, latency issues and congestion areas. 
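
As a toy illustration of the predictive-maintenance idea, the sketch below tracks a smoothed trend of one network metric and flags samples that break away from it. The latency values, smoothing factor, and threshold are all invented; real systems model far richer telemetry.

```python
def ewma_alerts(samples, alpha=0.3, tolerance=1.5):
    """Track an exponentially weighted moving average of a metric
    (here, link latency in ms) and flag samples that exceed the
    smoothed trend by `tolerance` times."""
    avg, alerts = samples[0], []
    for i, x in enumerate(samples[1:], start=1):
        if x > tolerance * avg:
            alerts.append((i, x, round(avg, 1)))
        avg = alpha * x + (1 - alpha) * avg  # update the trend
    return alerts

latency_ms = [20, 22, 21, 23, 22, 24, 60, 21, 22]
print(ewma_alerts(latency_ms))  # [(6, 60, 22.3)]: flags the spike against a ~22 ms trend
```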


Low-Power Wi-Fi Extends Signals Up to 3 Kilometers

Morse Micro has developed a system-on-chip (SoC) design that uses a wireless protocol called Wi-Fi HaLow, based on the IEEE 802.11ah standard. The protocol significantly boosts range by using lower-frequency radio signals that propagate further than conventional Wi-Fi frequencies. It is also low power, and is geared toward providing connectivity for Internet of Things (IoT) applications. To demonstrate the technology’s potential, Morse Micro recently conducted a test on the seafront in San Francisco’s Ocean Beach neighborhood. They showed that two tablets connected over a HaLow network could communicate at distances of up to 3 km while maintaining speeds around 1 megabit per second—enough to support a slightly grainy video call. ... “It is pretty unprecedented range,” says Prakash Guda, vice president of marketing and product management at Morse Micro. “And it’s not just the ability to send pings but actual megabits of data.” The HaLow protocol works in much the same way as conventional Wi-Fi, says Guda, apart from the fact that it operates in the 900-megahertz frequency band rather than the 2.4-gigahertz band. 
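
The range advantage of the lower band falls directly out of the free-space path-loss formula, FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.44. A quick worked example (idealized: it ignores antenna gain, receiver sensitivity, and HaLow's narrower channels, which help further):

```python
from math import log10

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for distance in km, frequency in MHz."""
    return 20 * log10(distance_km) + 20 * log10(freq_mhz) + 32.44

# The same 3 km link in HaLow's 900 MHz band vs. conventional 2.4 GHz Wi-Fi:
print(round(fspl_db(3, 900), 1))   # ~101.1 dB
print(round(fspl_db(3, 2400), 1))  # ~109.6 dB, about 8.5 dB worse

# 20*log10(2400/900) ~= 8.5 dB: all else equal, the 900 MHz signal
# tolerates roughly 2.7x the distance for the same path loss.
```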


How to Make the Most of In-House Software Development

Maintaining an in-house software development team can be tough. You must hire skilled developers – which is no easy feat in today’s economy, where talented programmers remain in short supply – and then manage them on an ongoing basis. You must also ensure that your development team is nimble enough to respond to changing business needs and that it can adapt as your technology stack evolves. Given these challenges, it’s no surprise that most organizations now outsource application development instead of relying on in-house teams. But I’m here to tell you that just because in-house development can be hard doesn’t mean that outsourcing is always the best approach. On the contrary, IT organizations that choose to invest in in-house development for some or all of the work can realize lower overall costs and a competitive advantage by creating domain-specific expertise. Keeping development in-house can help organizations address unique security requirements and maintain full control over the development lifecycle and roadmaps. For businesses with specialized technology, security and operational needs, in-house development is often the best strategy.



Quote for the day:

“The first step toward success is taken when you refuse to be a captive of the environment in which you first find yourself.” -- Mark Caine

Daily Tech Digest - February 02, 2024

CISO accountability in the era of software supply chain security

A CISO now needs to start acting like a CFO on their very first day in the role. CISOs no longer have the freedom to prioritize business interests and subordinate cybersecurity, because they will be found liable for misrepresenting security practices in the event of a cyber-incident. CFOs can’t let some fraud, financial crime, absence of key stated controls, or insider dealing go while they ease into the role, and CISOs will need to start acting the same way regarding their company’s security program. While some may find this new era of CISO accountability a threat, they need to look at the massive opportunity as well — and the opportunity is quite big! Yes, CISOs will have more work to do with this new level of scrutiny and accountability. However, this new era will allow them to take a more senior and influential role in the organization, receive greater allocations of resources to maintain an appropriate level of perceived risk, prioritize critical enterprise security needs, and be fully transparent on what security issues their company is dealing with. And because CISOs and their respective companies will be more transparent and accountable, this should lead to greater trust in them from customers, board members, investors, employees, regulators, and the communities in which they operate.


From Chaos to Control: Nurturing a Culture of Data Governance

Data architecture encompasses the design, structure, and organization of data assets. It involves defining the blueprint for how data is collected, stored, processed, accessed, and managed throughout its lifecycle. Data architecture sets the foundation for data governance by establishing standards, principles, and guidelines for data management. It encompasses aspects such as data models, data flow diagrams, database design, and the integration of data across different systems. Effective data architecture is crucial for ensuring data consistency, integrity, and accessibility, aligning data assets with the organization's goals and objectives. Data modeling is a specific aspect of data architecture that involves creating visual representations (models) of the data and its relationships within an organization. This process helps in understanding and documenting the structure of data entities, attributes, and their interactions. Data modeling plays a vital role in data governance by providing a standardized way to communicate and document data requirements, ensuring a collective understanding among stakeholders. 
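
To ground the terminology, here is a small fragment of a data model expressed as code: two entities, their attributes, and a one-to-many relationship between them. The entity and field names are invented for illustration; in practice this would be captured in an ER diagram or schema definition.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Customer:          # an entity
    customer_id: int     # attributes with explicit types
    name: str
    email: str

@dataclass
class Order:             # a second entity
    order_id: int
    customer_id: int     # foreign key: each Order belongs to one Customer
    placed_on: date
    total_cents: int     # money in minor units avoids float rounding error

alice = Customer(1, "Alice", "alice@example.com")
order = Order(100, alice.customer_id, date(2024, 2, 1), 4999)
```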


Cloud migration is still a pain

The cloud providers sold the cloud as something that needed to be leveraged ASAP, so massive workloads and data sets were lifted and shifted to this new “miracle platform.” Three things occurred: First, it was more expensive than we thought. I use the unproven number of the cloud costing 2.5 times what enterprises believed it would cost to operate workloads and data sets in the cloud. This all blew up in 2022, when we also had the accommodation of workloads moved during the pandemic, many with unimproved applications and data sets. Second, poorly designed, developed, and deployed applications moved from enterprise data centers to the cloud, where applications still need to be better designed, developed, and deployed. We’re paying more for them to run in the cloud since we’re paying for the existing inefficiencies. ... Finally, enterprises aren’t learning from their mistakes. I’ve often been taken aback by the amount of lousy cloud reality that most enterprises accept. Although some have moved back to enterprise data centers, some are indeed funding application and data optimization. We’re still getting a C- in returning value to the business, our shared objective.


The Growing Demand for Infrastructure Resiliency—How Digital Transformation Can Help

According to Bademosi, “Integrating digital technologies is not just a trend, it is the next frontier in creating sustainable, resilient, and advanced infrastructure systems. As we look to the future, it is evident that digital technology will be at the heart of every innovation.” The benefits of harnessing new technologies and transforming infrastructure seem limitless. But government agencies and industry partners may not know where to start. According to Bademosi, it begins by gauging the current state of critical infrastructure systems and what is needed for the future. What are the strengths, weaknesses, and potential opportunities available for infrastructure? Next, it’s important to foster collaborations across government agencies, industry leaders, and the communities that will be impacted by the proposed project. Industry partners and government agencies then need to empower their workforce with the training they need to deploy these technologies on future projects. Once training is complete, they can begin to experiment with these new technologies on smaller pilot projects, using them as workshops to test strategies.


Falling into the Star Gate of Hidden Microservices Costs

We’re not going to argue that monoliths are perfect. But an intentionally designed monolith has a comprehensible solution to each flaw, and unlike a microservices architecture, each one you resolve creates a feedback loop of improvement with internal scope. To improve your monolith in some dimension — performance scaling, the ease of onboarding for new developers, the sheer quality of your code — you need to invest in the application itself, not abstract the problem to a third party or accept a higher cloud computing bill, hoping that scale will solve your problems. Of their experience, the Amazon Prime Video team wrote, “Moving our service to a monolith reduced our infrastructure cost by over 90%. It also increased our scaling capabilities. … The changes we’ve made allow Prime Video to monitor all streams viewed by our customers and not just the ones with the highest number of viewers. This approach results in even higher quality and an even better customer experience.” Since the Amazon Prime Video engineering team published their blog post, many have argued about whether their move is a major win for monoliths, the same-old microservices architecture with new branding or a semantic misinterpretation of what a “service” is. 


The importance of IoT visibility in OT environments

The surge of sensory data volume and network traffic generated by IIoT devices can overwhelm existing network infrastructure. Outdated hardware and bandwidth constraints can severely cripple the efficient operation of these interconnected systems. Scaling up and modernizing infrastructure becomes imperative in paving the way for a flourishing IIoT ecosystem. ... In the intricate game of cyber defense, network visibility reigns supreme. It is the map and compass that guides defenders through the ever-shifting digital landscape, illuminating the hidden pathways where threats dwell. Without it, organizations navigate murky waters, blind to threat actors weaving through their systems. Network visibility emerges as the antidote, empowering defenders with a four-pronged shield: first, early threat detection, where anomalies transform into bright beacons revealing potential attacks before they escalate; second, swift incident response, allowing isolation and mitigation of the affected area like quarantining a digital contagion; and third, proactive threat hunting, where defenders actively scour network data for lurking adversaries and hidden vulnerabilities, pre-empting attacks before they materialize.


Embrace Change: Navigating Digital Transformation for Sustainable Success

Staying within the confines of one’s comfort zone for an extended period is ill-advised, especially in the face of disruptive innovations. The world has little patience for those who cling to past glories and turn a blind eye to emerging technologies. Historical examples, such as the decline of the Roman Empire, serve as stark reminders of the perils of stagnation and resistance to change. In a world that is in a constant state of flux, the choice to adapt or face extinction rests squarely on the shoulders of individuals and organizations. The significance of speed as a competitive edge cannot be overstated. Just as the velocity of an aircraft enables it to soar through the skies and the dynamic force of a fast-moving car propels it forward, adapting to the rapid pace of change is imperative for survival in the business realm. Embracing change willingly is not merely a suggestion; it is a strategic imperative. In a world characterized by constant evolution, the notion of being “too big to fail” is a myth. The decision to adapt is not dictated by external forces; it is entirely within the control of individuals and organizations.


“All About the Basics”: Cyber Hygiene in the Digital Age

In this world, the digital equivalent of leaving your front door unlocked and the windows wide open is a reality. The result? Well, it’s not pretty. First, there’s the risk of data breaches. These aren’t just inconveniences; they’re full-blown catastrophes. When we’re lax with updates and passwords, we’re essentially rolling out the red carpet for cybercriminals. They waltz in, pilfer sensitive data, and leave chaos in their wake. The fallout? Compromised personal information, financial loss, and let’s not forget the everlasting damage to our reputation. It’s the kind of nightmare that keeps grumpy CISOs up at night. Then there are the phishing attacks. Without proper awareness and training, our well-meaning but sometimes naïve users might unwittingly invite trouble right into our digital living room. It starts with an innocent click on what seems like a legitimate email. And before you know it, malware has spread through your systems like wildfire. The result? System downtimes, productivity loss, and a frantic race against time to contain the breach. And let’s not even get started on unsecured devices; it’s like leaving your secret plans in a cafe, waiting for the first curious bystander to pick them up.


Navigating the New Era with Generative AI Literacy

As technology has evolved, that focus on data literacy has quickly transitioned into a focus on generative AI literacy -- a new breed of data literacy built on the core tenets of data literacy: data collection and curation, data visualization, and interpretation. With the advent of generative AI tools from industry leaders such as OpenAI, Google, Microsoft, and Anthropic, companies need their employees to know how to leverage these tools to create business value. Ultimately, data literacy and generative AI literacy have the same goals -- to drive effective business decision-making and to create organizational value. ... Generative AI’s power is due in part to its ability to accept such a wide array of inputs and prompts, but this also requires that employees learn to expand their thinking. As repetitive tasks are automated away, employees will be free to think more innovatively, which is not always intuitive for them. Educational institutions have focused on teaching students to learn facts for many years but are now being required to teach students how to think in terms of problem sets, alternative approaches, and innovative solution discovery.


Risk Management is Never Having to Say, ‘I Am sorry’

Enterprise architecture is largely an exercise in risk management. Unless architecture organizations are willing to take on risk, they are unlikely to be perceived as influential partners in solving problems. Rory established his team as a solver of gnarly problems, not complaining bystanders, by accepting accountability to deliver the mobile commerce platform. One of the biggest categories of risk that architecture leaders must manage is relational risk, i.e. navigating the executive sociology. It wasn’t easy, but between Rory and Loretta the architecture department was able to achieve a key accomplishment, increasing the value of the company’s search and mobile toolkit assets, by establishing empathy with powerful business partners and creating a win / win solution to an urgent business problem. Architects can use what I call “organizational jujitsu” to gain support from agile teams by positioning high quality architecture assets as accelerators of agility. That is, if the architecture department can make the use of existing assets and contracts the fastest route to working tested product frequently delivered to customers, it can leverage the 



Quote for the day:

"Your greatest area of leadership often comes out of your greatest area of pain and weakness." -- Wayde Goodall

Daily Tech Digest - February 01, 2024

Making the Leap From Data Governance to AI Governance

One of the AI governance challenges Regensburger is researching revolves around ensuring the veracity of outcomes, of the content that’s generated by GenAI. “It’s sort of the unknown question right now,” he says. “There’s a liability question on how you use…AI as a decision support tool. We’re seeing it in some regulations like the AI Act and President Biden’s proposed AI Bill of Rights, where outcomes become really important, and it moves that into the governance sphere.” LLMs have the tendency to make things up out of whole cloth, which poses a risk to anyone who uses them. For instance, Regensburger recently asked an LLM to generate an abstract on a topic he researched in graduate school. “My background is in high energy physics,” he says. “The text it generated seemed perfectly reasonable, and it generated a series of citations. So I just decided to look at the citations. It’s been a while since I’ve been in graduate school. Maybe something had come up since then?” “And the citations were completely fictitious,” he continues. “Completely. They look perfectly reasonable. They had Physical Review Letters. It had all the right formats. And at your first casual inspection it looked reasonable.


Architecting for Industrial IoT Workloads: A Blueprint

The first step in an IIoT-enabled environment is to establish communication interfaces with the machinery. In this step, there are two primary goals: read data from machines (telemetry) and write data to machines (control). Machines in a manufacturing plant can have legacy/proprietary communication interfaces and modern IoT sensors. Most industrial machines today are operated by programmable logic controllers (PLCs). A PLC is an industrial computer ruggedized and adapted to control manufacturing processes — such as assembly lines, machines, and robotic devices — or any activity requiring high reliability, ease of programming and process fault diagnosis. However, PLCs provide limited connectivity interfaces with the external world over protocols like HTTP and MQTT, restricting external data reads (for telemetry) and writes (for control and automation). Apache PLC4X bridges this gap by providing a set of API abstractions over legacy and proprietary PLC protocols. PLC4X is an open-source universal protocol adapter for IIoT appliances that enables communication over protocols including, but not limited to, Siemens S7, Modbus, Allen Bradley, Beckhoff ADS, OPC-UA, Emerson, Profinet, BACnet and Ethernet.
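
PLC4X itself is JVM-first; to keep this digest's examples in one language, the same bridging idea (reading holding registers from a PLC over Modbus/TCP) is sketched below with the Python pymodbus library instead. The host, port, and register addresses are placeholders, and call signatures vary slightly between pymodbus versions.

```python
from pymodbus.client import ModbusTcpClient  # pip install pymodbus

# Read two holding registers from a PLC over Modbus/TCP, the same
# bridging role PLC4X plays for JVM applications.
client = ModbusTcpClient("192.0.2.10", port=502)  # placeholder PLC address
client.connect()

result = client.read_holding_registers(address=0, count=2)
if not result.isError():
    temperature, pressure = result.registers  # raw 16-bit register values
    print(temperature, pressure)

client.close()
```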


6 user experience mistakes made for security and how to fix them

The challenge here is to communicate effectively with your non-experts in a way that they understand the “what” and “why” of cybersecurity. “The goal is to make it practical rather than condescending, manipulative, or punitive,” Sunshine says. “You need to take down that fear factor.” So long as people have the assurance that they can come clean and not be fired for that kind of mistake, they can help strengthen security by coming forward about problems instead of trying to cover them up. ... To achieve optimal results, you have to strike the right balance between the level of security required and the convenience of users. Much depends on the context. The bar is much higher for those who work with government entities, for example, than a food truck business, Sunshine says. Putting all the safeguards required for the most regulated industries into effect for businesses that don’t require that level of security introduces unnecessary friction. Failing to differentiate among different users and needs is the fundamental flaw of many security protocols that require everyone to use every security measure for everything.


5 New Ways Cyberthreats Target Your Bank Account

Deepfake technology, initially designed for entertainment, has evolved into a potent tool for cybercriminals. Through artificial intelligence and machine learning, these technologies fuel intricate social engineering attacks, enabling attackers to mimic trusted individuals with astonishing precision. This proficiency grants them access to critical data like banking credentials, resulting in significant financial repercussions. ... Modern phishing tactics now harness artificial intelligence to meticulously analyse extensive data pools, encompassing social media activities and corporate communications. This in-depth analysis enables the creation of highly personalised and contextually relevant messages, mimicking trusted sources like banks or financial institutions. This heightened level of customisation significantly enhances the credibility of these communications, amplifying the risk of recipients disclosing sensitive information, engaging with malicious links, or unwittingly authorising fraudulent transactions. ... Credential stuffing is a prevalent and dangerous method cybercriminals use to breach bank accounts. This attack method exploits the widespread practice of password reuse across multiple sites and services.
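
One widely deployed defense against the password reuse that credential stuffing exploits is screening passwords against known breach corpora. A minimal sketch using the Pwned Passwords k-anonymity range API: only the first five characters of the password's SHA-1 hash ever leave the machine, never the password itself.

```python
import hashlib
from urllib.request import urlopen

def times_pwned(password: str) -> int:
    """Return how often a password appears in known breaches,
    via the Pwned Passwords k-anonymity range API."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(times_pwned("password123"))  # a very large number: reused everywhere
```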


Italian Businesses Hit by Weaponized USBs Spreading Cryptojacking Malware

A financially motivated threat actor known as UNC4990 is leveraging weaponized USB devices as an initial infection vector to target organizations in Italy. Google-owned Mandiant said the attacks single out multiple industries, including health, transportation, construction, and logistics. "UNC4990 operations generally involve widespread USB infection followed by the deployment of the EMPTYSPACE downloader," the company said in a Tuesday report. "During these operations, the cluster relies on third-party websites such as GitHub, Vimeo, and Ars Technica to host encoded additional stages, which it downloads and decodes via PowerShell early in the execution chain." ... Details of the campaign were previously documented by Fortgale and Yoroi in early December 2023, with the former tracking the adversary under the name Nebula Broker. The infection begins when a victim double-clicks on a malicious LNK shortcut file on a removable USB device, leading to the execution of a PowerShell script that's responsible for downloading EMPTYSPACE (aka BrokerLoader or Vetta Loader) from a remote server via another intermediate PowerShell script hosted on Vimeo.


Understanding Architectures for Multi-Region Data Residency

A critical principle in the context of multi-region deployments is establishing clarity on truth and trust. While knowing the source of truth for a piece of data is universally important, it becomes especially crucial in multi-region scenarios. Begin by identifying a fundamental unit, an "atom," within which all related data resides in one region. This could be an organizational entity such as a company or a team, depending on your business structure. Any operation that involves crossing these atomic boundaries inherently becomes a cross-region scenario. Therefore, defining this atomic unit is essential in determining the source of truth for your multi-region deployment. In terms of trust, as different regions hold distinct data, communication between them becomes necessary. This could involve scenarios like sharing authentication tokens across regions. The level of trust between regions is a decision rooted in the specific needs and context of your business. Consider the geopolitical landscape if governments are involved, especially if cells are placed in regions with potentially conflicting interests.
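
A minimal sketch of the atomic-unit rule, with hypothetical names: every piece of data belongs to exactly one atom, each atom is homed in a single region that serves as its source of truth, and any operation touching atoms homed in different regions is by definition a cross-region scenario.

```python
from dataclasses import dataclass

# Hypothetical mapping: each atomic unit (here, an organization) is pinned
# to one home region that is the source of truth for all its data.
ATOM_HOME_REGION = {
    "org-acme": "eu-west",
    "org-globex": "us-east",
}

@dataclass
class Operation:
    atoms_touched: set[str]  # atomic units this operation reads or writes

def is_cross_region(op: Operation) -> bool:
    """An operation spanning atoms homed in different regions must go
    through whatever cross-region trust protocol the business defines."""
    regions = {ATOM_HOME_REGION[atom] for atom in op.atoms_touched}
    return len(regions) > 1

# Example: a transfer between org-acme and org-globex crosses regions.
print(is_cross_region(Operation({"org-acme", "org-globex"})))  # True
```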


Developing a Data Literacy Program for Your Organization

Before developing a data literacy program for an organization, it is crucial to conduct a comprehensive training needs assessment. This assessment helps in understanding the current level of data literacy within the organization and identifying areas that require improvement. It involves gathering information about employees’ existing knowledge, skills, and attitudes toward data analysis and interpretation. To conduct the needs assessment, different methods can be employed. Surveys, interviews, focus groups, or even analyzing existing data can provide valuable insights into employees’ proficiency levels and their specific learning needs. By involving various stakeholders, such as managers, department heads, and employees themselves, in this process, a holistic understanding of the organization’s requirements can be achieved. ... It is also beneficial to compare the program’s outcomes against predefined benchmarks or industry standards. This allows organizations to benchmark their progress against other similar initiatives and identify areas where further improvements are necessary. Overall, continuously evaluating the effectiveness of a data literacy program helps organizations understand its impact on individuals’ capabilities and organizational performance.
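
One way to make the assessment actionable is to score survey responses against a target proficiency level and rank the gaps by department. The sketch below is purely illustrative: the 1-5 self-assessment scale, the target level, and the sample data are all assumptions.

```python
from statistics import mean

# Hypothetical target: average self-assessed proficiency of 3.5 on a 1-5 scale.
TARGET_LEVEL = 3.5

responses = [
    {"dept": "Finance", "score": 4}, {"dept": "Finance", "score": 3},
    {"dept": "Marketing", "score": 2}, {"dept": "Marketing", "score": 3},
]

def training_gaps(rows):
    """Return departments below target, with the size of the gap."""
    by_dept = {}
    for row in rows:
        by_dept.setdefault(row["dept"], []).append(row["score"])
    return {dept: round(TARGET_LEVEL - mean(scores), 2)
            for dept, scores in by_dept.items() if mean(scores) < TARGET_LEVEL}

print(training_gaps(responses))  # {'Marketing': 1.0}
```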


Women In Architecture: Early Insights and Reflections

The question of why there are so few women in architecture is a key one in our minds. Rather than dwelling on the negative, the conversations focus on identifying the root causes to help us move into action effectively. I have learned that the answer to this question is incredibly nuanced and layered, with many interrelated factors. Some root causes for fewer women in architecture draw from the macro-level context, including a similar set of challenges experienced by women in technology. However, one of the biggest contributors is the architecture profession itself and how it is presented. This has been a hard truth that has asserted itself as a common thread throughout the conversations. For example, the lack of clarity regarding the role and value proposition of architecture, often perceived as abstract, technical, and unattainable, poses a substantial barrier. ... However, there is a powerful correspondence between the momentum for more diversity in architecture and exactly what the profession needs most now. For architects of the future to thrive, it is not enough to excel at cognitive, architectural, and technical competencies; it is just as important to master human competencies such as communication, influence, leadership, and emotional intelligence.


New York Times Versus Microsoft: The Legal Status of Your AI Training Set

One of the problems the tech industry has had from the start is product contamination with intellectual property from a competitor. The tech industry is not alone, and the problem of one company illicitly acquiring the intellectual property of another and then getting caught goes back decades. If an engineer uses generative AI that has a training set contaminated by a competitor’s intellectual property, there is a decent chance, should that competitor find out, that the resulting product will be found to be infringing and blocked from sale -- with the company that had made use of that AI potentially facing severe fines and sanctions, depending on the court’s ruling. ... Ensuring that any AI solution from any vendor contains indemnification for the use of its training set, or is constrained to only use data sets that have been vetted as fully under your or your vendor’s legal control, should be a primary requirement for use. (Be aware that if you provide AI capabilities to others, an increasing number of customers will demand indemnification.) You’ll need to ensure that the indemnification is adequate to your needs and that the data sets won’t compromise your products or services, whether under development or on the market, so your revenue stream isn’t put at risk.


How to calculate TCO for enterprise software

It’s obvious that hardware, once it has reached end-of-life, needs to be disposed of properly. With software, there are costs as well, primarily associated with data export. First, data needs to be migrated from the old software to the new, which can be complex given all the dependencies and database calls that might be required for even a single business process. Then there are backups and disaster recovery. The new software might require that data be formatted in a different way. And you still might need to keep archived copies of certain data stores from the old system for regulatory or compliance reasons. Another wrinkle in the TCO calculation is estimating how long you plan to use the software. Are you an organization that doesn’t change tech stacks if it doesn’t have to and therefore will probably run the software for as long as it still does the job? In that case, it might make sense to do a five-year TCO analysis as well as a 10-year version. On the other hand, what if your company has an aggressive sustainability strategy that calls for eliminating all of its data centers within three years and moving as many apps as possible to SaaS alternatives?
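
A back-of-envelope model makes the horizon question concrete. All figures below are hypothetical placeholders; the point is that end-of-life costs (data export, re-migration, archival copies) belong in the formula alongside the obvious acquisition and run costs.

```python
# Minimal sketch of a multi-year TCO comparison with hypothetical figures.
def total_cost_of_ownership(
    upfront: float,      # licenses, implementation, migration in
    annual_run: float,   # subscriptions, support, infrastructure
    exit_cost: float,    # data export, re-migration, archival copies
    years: int,
) -> float:
    return upfront + annual_run * years + exit_cost

five_year = total_cost_of_ownership(upfront=250_000, annual_run=80_000,
                                    exit_cost=60_000, years=5)
ten_year = total_cost_of_ownership(upfront=250_000, annual_run=80_000,
                                   exit_cost=60_000, years=10)
print(five_year, ten_year)  # 710000 1110000
```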



Quote for the day:

"One advantage of talking to yourself is that you know at least somebody's listening." -- Franklin P. Jones

Daily Tech Digest - January 31, 2024

Rethinking Testing in Production

With products becoming more interconnected, trying to accurately replicate third-party APIs and integrations outside of production is close to impossible. Trunk-based development, with its focus on continuous integration and delivery, acknowledges the need for a paradigm shift. Feature flags emerge as the proverbial Archimedes lever in this transformation, offering a flexible and controlled approach to testing in production. Developers can now gradually roll out features without disrupting the entire user base, mitigating the risks associated with traditional testing methodologies. Feature flags empower developers to enable a feature in production for themselves during the development phase, allowing them to refine and perfect it before exposing it to broader testing audiences. This progressive approach ensures that potential issues are identified and addressed early in the development process. As the feature matures, it can be selectively enabled for testing teams, engineering groups or specific user segments, facilitating thorough validation at each step. The logistical nightmare of maintaining identical environments is alleviated, as testing in production becomes an integral part of the development workflow.
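
A minimal sketch of how such a flag might gate a feature, assuming a home-grown flag rather than any particular vendor's API: explicit allow lists cover the developer-only and testing-team phases, and a stable hash bucket covers the percentage rollout to real users.

```python
import hashlib

# Illustrative flag definition; names and fields are assumptions.
FLAG = {
    "name": "new-checkout",
    "allow_users": {"dev-alice", "qa-team-bot"},  # always on for these
    "rollout_percent": 5,                         # then 5% of everyone else
}

def is_enabled(flag: dict, user_id: str) -> bool:
    if user_id in flag["allow_users"]:
        return True
    # Hash the user into a stable bucket 0-99 so a given user keeps the
    # same experience as the rollout percentage grows.
    digest = hashlib.sha256(f"{flag['name']}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < flag["rollout_percent"]
```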


Enterprise Architecture in the Financial Realm

Enterprise architecture emerges as the North Star guiding banks through these changes. Its role transcends being a mere operational construct; it becomes a strategic enabler that harmonizes business and technology components. A well-crafted enterprise architecture lays the foundation for adaptability and resilience in the face of digital transformation. Enterprise architecture manifests two key characteristics: unity and agility. The unity aspect inherently provides an enterprise-level perspective, where business and IT methodologies seamlessly intertwine, creating a cohesive flow of processes and data. Conversely, agility in enterprise architecture construction involves deconstruction and subsequent reconstruction, refining shared and reusable business components, akin to assembling Lego bricks. ... Quantifying the success of digital adaptation is crucial. Metrics should not solely focus on financial outcomes but also on key performance indicators reflecting the effectiveness of digital initiatives, customer satisfaction, and the agility of operational models.


Cloud Security: Stay One Step Ahead of the Attackers

The relatively easy availability of cloud-based storage can lead to data sprawl that is uncontrolled and unmanageable. In many cases, data that must be deleted or secured is left ungoverned because organizations are not aware of its existence. In April 2022, cloud data security firm Cyera found unmanaged copies of data stores, snapshots, and log data. The firm’s researchers found that 60% of the data security issues present in cloud data stores were due to unsecured sensitive data. They further observed that over 30% of scanned cloud data stores were ghost data, and more than 58% of these ghost data stores contained sensitive or very sensitive data. ... Despite best practices advised by cloud service providers, data breaches that originate in the cloud have only increased. IBM’s annual Cost of a Data Breach report, for example, highlights that 45% of studied breaches occurred in the cloud. Also noteworthy: the 43% of reporting organizations that said they are only in the early stages of implementing security practices to protect their cloud environments, or have not started at all, observed higher breach costs.
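
As one narrow illustration of hunting for ghost data, the sketch below lists old, untagged EBS snapshots in an AWS account via boto3. The staleness threshold and the untagged-equals-ungoverned assumption are ours, and a real program would also have to cover object stores, database copies, and log archives.

```python
import boto3
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=180)  # illustrative threshold

def stale_untagged_snapshots(region: str = "us-east-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    cutoff = datetime.now(timezone.utc) - STALE_AFTER
    stale = []
    for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            # Old and untagged: a candidate "ghost" copy to review,
            # not automatic proof that it is safe to delete.
            if snap["StartTime"] < cutoff and not snap.get("Tags"):
                stale.append(snap["SnapshotId"])
    return stale
```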


Five Questions That Determine Where AI Fits In Your Digital Transformation Strategy

Once you understand the why and the what, only then can you consider how your organization can use insights from AI to better accomplish its goals. How will your people respond, and how will they benefit? Today’s organizations have multiple technology partners, and they may have many that are all saying they can do AI. But how will your organization work with all those partners to make an AI solution come together? Many organizations are developing AI policies to define how it can be used. Having these guardrails ensures that your organization is operating ethically, morally and legally when it comes to the use of AI. ... It’s important to consider whether your organization is truly ready for AI at an enterprise or divisional level before deciding to implement AI at scale. Pilot projects can help you determine whether the implementation is generating the intended results and better understand how end users will interact with the processes. If you can't achieve customization and personalization across the organization, AI initiatives will be much tougher to implement.


A Dive into the Detail of the Financial Data Transparency Act’s Data Standards Requirements

The act is a major undertaking for regulators and regulated firms. It is also an opportunity for the LEI, if selected, to move to another level in the US, which has been slow to adopt the identifier, and to significantly increase numbers that will strengthen the Global LEI System. Industry experts suggest that regulators in scope of the FDTA, collectively called Financial Stability Oversight Council (FSOC) agencies, initially considered data standards including the LEI and the Financial Instrument Global Identifier published by Bloomberg, but that the LEI is the best match for the regulation’s requirement that covered agencies establish “common identifiers” for information reported to covered regulatory agencies, which could include transactions and financial products/instruments. ... The selection and implementation of a reporting taxonomy is more challenging, as it will require many of the regulators to abandon existing reporting practices often based on PDF, text, and CSV files, and replace them with electronic reporting and machine-readable tagging. XBRL fits the bill, say industry experts, although there has been pushback from some agencies that see the unfunded requirement for change as too great a burden.
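
For a flavor of what machine-validated identifiers buy regulators, here is a short sketch of LEI check-digit validation. An LEI (ISO 17442) is 20 alphanumeric characters whose last two are check digits verified with the ISO 7064 MOD 97-10 scheme, with letters mapped to the values 10 through 35.

```python
def is_valid_lei(lei: str) -> bool:
    """Check the structure and MOD 97-10 check digits of an LEI."""
    if len(lei) != 20 or not lei.isalnum():
        return False
    # Map 0-9 to themselves and A-Z to 10-35, then test the whole
    # resulting number modulo 97, which must equal 1.
    digits = "".join(str(int(ch, 36)) for ch in lei.upper())
    return int(digits) % 97 == 1
```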


Data Center Approach to Legacy Modernization: When is the Right Time?

Legacy systems can lead to inefficiencies in your business. If we take one of the parameters mentioned above, such as cooling, one example of inefficiency could lie within an old server that’s no longer of use but still turned on. This could be placing unnecessary strain on your cooling, thus impacting your environmental footprint. Legacy systems may no longer be the most appropriate for your business, as newer technologies emerge that offer a more efficient method of producing the same, or better, results. If you neglect this technology, you might be giving your competitors an advantage, which could be costly for your business. ... A cyber-attack takes place every 39 seconds, according to one report. This puts businesses at risk of losing or compromising not only their intellectual property and assets but also their customers’ data. This could put you at risk of damaging your reputation and even facing regulatory fines. One of the best reasons to invest in digital transformation is the security of your business. Systems that no longer receive updates can become a target of cyber-attacks and act as a vulnerability within your technology infrastructure.


4 paths to sustainable AI

Hosting AI operations at a data center that uses renewable power is a straightforward path to reduce carbon emissions, but it’s not without tradeoffs. Online translation service Deepl runs its AI functions from four co-location facilities: two in Iceland, one in Sweden, and one in Finland. The Icelandic data center uses 100% renewably generated geothermal and hydroelectric power. The cold climate also eliminates 40% or more of the total data center power needed to cool the servers because they open the windows rather than use air conditioners, says Deepl’s director of engineering Guido Simon. Cost is another major benefit, he says, with prices of five cents per kilowatt-hour compared to about 30 cents or more in Germany. The network latency between the user and a sustainable data center can be an issue for time-sensitive applications, says Stent, but only in the inference stage, where the application provides answers to the user, rather than the preliminary training phase. Deepl, with headquarters in Cologne, Germany, found it could run both training and inference from its remote co-location facilities. “We’re looking at roughly 20 milliseconds more latency compared to a data center closer to us,” says Simon.
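
The quoted electricity prices translate into large absolute differences at AI scale. The back-of-envelope comparison below assumes a hypothetical 500 kW cluster running year-round and treats the quoted prices as euro cents; only the per-kilowatt-hour prices come from the article.

```python
# Hypothetical cluster size; only the prices are taken from the article.
CLUSTER_KW = 500
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

for site, eur_per_kwh in [("Iceland", 0.05), ("Germany", 0.30)]:
    annual_cost = CLUSTER_KW * HOURS_PER_YEAR * eur_per_kwh
    print(f"{site}: EUR {annual_cost:,.0f} per year")
# Iceland: EUR 219,000 per year
# Germany: EUR 1,314,000 per year
```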


Can ChatGPT drive my car? The case for LLMs in autonomy

Autonomous driving is an especially challenging problem because certain edge cases require complex, human-like reasoning that goes far beyond legacy algorithms and models. LLMs have shown promise in going beyond pure correlations to demonstrating a real “understanding of the world.” This new level of understanding extends to the driving task, enabling planners to navigate complex scenarios with safe and natural maneuvers without requiring explicit training. ... Safety-critical driving decisions must be made in less than one second. The latest LLMs running in data centers can take 10 seconds or more. One solution to this problem is hybrid-cloud architectures that supplement in-car compute with data center processing. Another is purpose-built LLMs that compress large models into form factors small enough and fast enough to fit in the car. Already we are seeing dramatic improvements in optimizing large models. Mistral 7B and Llama 2 7B have demonstrated performance rivaling GPT-3.5 with an order of magnitude fewer parameters (7 billion vs. 175 billion). Moore’s Law and continued optimizations should rapidly shift more of these models to the edge.
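
A minimal sketch of the hybrid-cloud idea, with stand-in functions rather than a real planner: attempt the large remote model, but bound the wait by the safety deadline and fall back to the compact onboard model on timeout. The deadline value and both model stubs are assumptions.

```python
import asyncio

DEADLINE_S = 0.8  # safety-critical decisions must land well under a second

async def query_cloud_llm(scene: dict) -> str:
    await asyncio.sleep(5)     # stand-in for a slow data-center model
    return "cloud_plan"

async def query_onboard_model(scene: dict) -> str:
    await asyncio.sleep(0.05)  # stand-in for a compressed in-car model
    return "onboard_plan"

async def plan(scene: dict) -> str:
    try:
        return await asyncio.wait_for(query_cloud_llm(scene), DEADLINE_S)
    except asyncio.TimeoutError:
        # The cloud missed the deadline; use the guaranteed local answer.
        return await query_onboard_model(scene)

print(asyncio.run(plan({"obstacle": "pedestrian"})))  # "onboard_plan"
```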


The Race to AI Implementation: 2024 and Beyond

The biggest problem is that the competitive and product landscape will be undergoing massive flux, so picking a strategic solution will be increasingly difficult. Younger companies, which are less likely to be able to keep up with the speed of these advancements, should focus on openness, interoperability, and compatibility, so that if they fail, someone else can pick up support. If you aren’t locked into a single vendor’s solution and can mix and match as needed, you can move on or off a platform based on your needs. Like any new technology, take advice about hardware selection from the platform supplier. This means that if you are using ChatGPT, you want to ask OpenAI for advice about new hardware. If you are working with Microsoft or Google or any other AI developer, ask them what hardware they would recommend. ... You need a vendor that embraces all the client platforms for hybrid AI and one with a diverse, targeted solution set that individually focuses on the markets your firm is in. Right now, only Lenovo seems to have all the parts necessary, thanks to its acquisition of Motorola.



Quote for the day:

"It's fine to celebrate success but it is more important to heed the lessons of failure." -- Bill Gates