Daily Tech Digest - February 13, 2024

Advanced Microsegmentation Strategies for IT Leaders

Microsegmentation, and network segmentation in general, is a 50-year-old cybersecurity strategy that “involves dividing a network into smaller zones to enhance security by restricting the movement of a threat to an isolated segment rather than to the whole network,” says Guy Pearce, a member of the ISACA Emerging Trends Working Group. ... Moyle says that any segmentation (micro or otherwise) can be “part of a security strategy based on use case, architecture and other factors.” He notes that microsegmentation itself isn’t an end goal for security, and that IT leaders should instead see it as “a mechanism that’s part of a broader holistic strategy.” That said, many factors go into a successful microsegmentation implementation, chief among them careful planning. Microsegmentation goes hand in hand with setting up granular security policies. It also relies on continuous monitoring, evaluation and user education and awareness, Pearce says. Successful microsegmentation also requires automation, incident response orchestration and cross-team collaboration. None of that is sustainable without a solid, well-maintained network architecture map.
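To make the idea of granular, default-closed segmentation policies concrete, here is a minimal Python sketch. The zone names, ports and rules are illustrative assumptions, not anything from the article: a flow between zones is permitted only if an explicit rule allows it.

```python
# Minimal sketch of a default-deny microsegmentation policy.
# Zone names, ports, and rules are illustrative assumptions only.

ALLOWED_FLOWS = {
    # (source zone, destination zone, destination port)
    ("web-frontend", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
    ("monitoring", "app-tier", 9100),
}

def flow_permitted(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    """Default deny: a flow is allowed only if explicitly listed."""
    return (src_zone, dst_zone, dst_port) in ALLOWED_FLOWS

# Lateral movement from a compromised web server straight to the database
# is blocked because no such rule exists.
print(flow_permitted("web-frontend", "db-tier", 5432))  # False
print(flow_permitted("app-tier", "db-tier", 5432))      # True
```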


Could DC win the new data center War of the Currents?

Fundamentally, electronics use DC power. The chips and circuit boards are all powered by direct current, and every computer or other piece of IT equipment that is plugged into the AC mains has to have a “power supply unit” (PSU), also known as a rectifier or switched mode power supply (SMPS) inside the box, turning the power from AC to DC. ... Data centers have an Uninterruptible Power Supply (UPS) designed to power the facility for long enough for generators to fire up. The UPS has to have a large store of batteries, and batteries store and deliver DC. So power enters the data center as AC, is converted to DC to charge the batteries, and then back to AC for distribution to the racks. ... Data centers are now looking at using microgrids for power. That means drawing on-site energy directly from sources such as fuel cells and solar panels. As it turns out, those sources often conveniently produce direct current. A data center could be isolated from the AC grid, and live on its own microgrid. On that grid, DC power sources charge batteries and power electronics, which fundamentally run on DC. In that situation, the idea of switching to AC for a short loop around the facility begins to look, well, odd.


5 key metrics for IT success

Taken together, speed, quality, and value metrics are essential for any organization undergoing transformation and looking to move away from traditional project management approaches, says Sheldon Monteiro, chief product officer at digital consulting firm Publicis Sapient. “This metric isn’t limited to a specific role or level within an IT organization,” he explains. “It’s relevant for everyone involved in the product development process.” Speed, quality, and value metrics represent a shift from traditional project management metrics focused on time, scope, and cost. “Speed ensures the ability to respond swiftly to change, quality guarantees that changes are made without compromising the integrity of systems, and value ensures that the changes contribute meaningfully to both customers and the business,” Monteiro says. “This holistic approach aligns IT practices with the demands of a continuously evolving landscape.” Focusing on speed, quality, and value provides a more nuanced understanding of an organization’s adaptability and effectiveness. “Focusing on speed, quality, and value provides insights into an organization’s ability to adapt to continuous change,” Monteiro says.


The future of cybersecurity: Anticipating changes with data analytics and automation

In recent years, cybersecurity threats have undergone a notable evolution, marked by the subtler tactics of mature threat actors who now leave fewer artifacts for analysis. The old metaphor ‘looking for a needle in a haystack’ (to describe the detection of malicious activity) is now more akin to ‘looking for a needle in a stack of needles.’ This shift necessitates the establishment of additional context around suspicious events to effectively differentiate legitimate from illegitimate activities. Automation emerges as a pivotal element in providing this contextual enrichment, ensuring that analysts can discern relevant circumstances amid the rapid and expansive landscape of modern enterprises. The landscape of cyber threats continues to evolve, and recent high-profile data breaches underscore the gravity of the shift. In response to these challenges, data analytics and automation play a crucial role in detecting lateral movement, privilege escalation, and exfiltration, particularly when threat actors exploit zero-day vulnerabilities to gain entry into an environment.
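The contextual enrichment described above can be pictured with a small sketch: an automated step that attaches asset and identity context to a raw detection before an analyst triages it. The lookup tables and field names below are hypothetical placeholders, not a real product's API.

```python
# Illustrative sketch: automatically enrich a suspicious event with context
# so an analyst can separate legitimate from illegitimate activity faster.
# The lookup tables stand in for a CMDB / identity provider and are assumptions.

ASSET_CONTEXT = {"10.0.4.17": {"owner": "finance-team", "criticality": "high"}}
IDENTITY_CONTEXT = {"jdoe": {"role": "dba", "usual_hours": (8, 18)}}

def enrich(event: dict) -> dict:
    """Attach asset and identity context to a raw detection event."""
    enriched = dict(event)
    enriched["asset"] = ASSET_CONTEXT.get(event["src_ip"], {})
    enriched["identity"] = IDENTITY_CONTEXT.get(event["user"], {})
    hour = event["hour_of_day"]
    usual = enriched["identity"].get("usual_hours")
    enriched["out_of_hours"] = bool(usual) and not (usual[0] <= hour < usual[1])
    return enriched

alert = {"src_ip": "10.0.4.17", "user": "jdoe", "hour_of_day": 3,
         "detail": "bulk read of customer table"}
print(enrich(alert))  # adds ownership, criticality, and an out-of-hours flag
```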


Significance of protecting enterprise data

In a world where data fuels innovation and growth, protecting enterprise data is not optional; it’s essential. The digital age has ushered in a complex threat landscape, necessitating a multifaceted approach to data protection. From next-gen SOCs and application security to IAM, data privacy, and collaboration with SaaS providers, every aspect plays a vital role. As traditional security tools and firewalls are no longer sufficient to detect and respond to modern threats, next-generation security operations centres (SOCs) can play a proactive role by leveraging technologies like AI, machine learning, and user behavior analytics. They can analyse huge volumes of data in real-time to detect even the most well-hidden attacks. Early detection and quick response are crucial to minimise damage from security incidents. Next-gen SOCs play a pivotal role in safeguarding enterprises by enhancing visibility, shortening response times, and reducing security risks. Protecting applications is equally important, as in the digital age, applications are the conduit through which data flows. Many successful breaches target exploitable vulnerabilities residing in the application layer, indicating the need for enterprise IT departments to be extra vigilant about application security. 


A changing world requires CISOs to rethink cyber preparedness

A cybersecurity posture that is societally conscious equally requires adopting certain underlying assumptions and taking preparatory actions. Foremost among these is the recognition that neutrality and complacency are anathema to one another in the context of digital threats stemming from geopolitical tension. As I recently wrote, the inherent complexity and significance of norm politicking in international affairs leads to risk that impacts cybersecurity stakeholders in nonlinear fashion. Recent conflicts support the idea that civilian hacking around major geopolitical fault lines, for instance, operates on divergent logics of operations depending on the phase of conflict that is underway. The result of such conditions should not be a reluctance to make statements or take actions that carry geopolitical relevance. Rather, cybersecurity stakeholders should clearly and actively attempt to delineate the way geopolitical threats and developments reflect the security objectives of the organization and its constituent community. They should do so in a way that is visible to that community.


AI-powered 6G wireless promises big changes

According to Will Townsend, an analyst at Moor Insights & Strategy, things are accelerating more quickly with 6G than they did with 5G at the same point in its evolution. And speaking of speeds, that will also be one of the biggest and most transformative improvements of 6G over 5G, due to the shift of 6G into the terahertz spectrum range, Townsend says. “This will present challenges because it’s such a high spectrum,” he says. “But you can do some pretty incredible things with instantaneous connectivity. With terahertz, you’re going to get near-instantaneous latency, no lag, no jitter. You’re going to be able to do some sensory-type applications.” ... The new 6G spectrum also brings another benefit – an ability to better sense the environment, says Spirent’s Douglas. “The radio signal can be used as a sensing mechanism, like how sonar is used in submarines,” he says. That can allow use cases that need three-dimensional visibility and complete visualization of the surrounding environment. “You could map out the environment – the shops, buildings, everything – and create a holistic understanding of the surroundings and use that to build new types of services for the market,” Douglas says.


What distinguishes data governance from information governance?

Data governance is primarily concerned with the proper management of data as a strategic asset within an organization. It emphasizes the accuracy, accessibility, security, and consistency of data to ensure that it can be effectively used for decision-making and operations. On the other hand, information governance encompasses a broader spectrum, dealing with all forms of information, not just data. It includes the management of data privacy, security, and compliance, as well as the handling of business processes related to both digital and physical information. ... Implementing data governance ensures that an organization's data is accurate, accessible, and secure, which is vital for operational decision-making and strategic planning. This governance type establishes the necessary protocols and standards for data quality and usage. Information governance, by managing all forms of information, helps organizations comply with legal and regulatory requirements, reduce risks, and enhance business efficiency and effectiveness. It also addresses the management of redundant, outdated, and trivial information, which can lead to cost savings and improved organizational performance.


The Future Is AI, but AI Has a Software Delivery Problem

As more developers become comfortable building AI-powered software, Act Three will trigger a new race: the ability to build, deploy and manage AI-powered software at scale, which requires continuous monitoring and validation at unprecedented levels. This is why crucial DevOps practices for delivering software at scale, like continuous integration and continuous delivery (CI/CD), will play a central role in providing a robust framework for engineering leaders to navigate the complexities of delivering AI-powered software — therefore turning these technological challenges into opportunities for innovation and competitive advantage. Just as software teams have honed practices for getting reliable, observable, available applications safely and quickly into customers’ hands at scale, AI-powered software is yet again evolving these methods. We’re experiencing a paradigm shift from the deterministic outcomes we’ve built software development practices around to a world with probabilistic outcomes. This complexity throws a wrench in the conventional yes-or-no logic that has been foundational to how we’ve tested software, requiring developers to navigate a variety of subjective outcomes.
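One way to picture the move away from yes-or-no assertions is a test that scores a model's output against an agreed threshold instead of comparing it to a single expected string. This is only a sketch; the scoring function and threshold are assumptions, not a prescribed practice.

```python
# Sketch of testing a probabilistic (AI-generated) output.
# Instead of assert output == expected, we score the output and
# assert that the score clears a threshold agreed on by the team.

def keyword_coverage(output: str, required_keywords: list[str]) -> float:
    """Toy scoring function: fraction of required keywords present."""
    text = output.lower()
    hits = sum(1 for kw in required_keywords if kw.lower() in text)
    return hits / len(required_keywords)

def test_summary_quality():
    # In a real pipeline this would call the model; here it is a canned string.
    output = "The invoice was paid late, incurring a 2% penalty fee."
    score = keyword_coverage(output, ["invoice", "late", "penalty"])
    assert score >= 0.8, f"summary coverage too low: {score:.2f}"

test_summary_quality()
print("probabilistic-style check passed")
```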


Generative AI – Examining the Risks and Mitigations

In working with AI, we should help executives at the companies we work with understand these risks, as well as the potential applications and innovations that can come from Generative AI. That is why it is essential that we take a moment now to develop a strategy for dealing with Generative AI. By developing a strategy, you will be well positioned to reap the benefits from the capabilities, and will be giving your organization a head-start in managing the risks. When looking at the risks, companies can feel overwhelmed or decide that it represents more trouble than they are willing to accept and may take the stance of banning GenAI. Banning GenAI is not the answer, and will only lead to a bypassing of controls and more shadow IT. So, in the end, they will use the technology but won’t tell you. ... AI risks can be broadly categorized into three types: Technical, Ethical, and Social. Technical risks refer to the potential failures or errors of AI systems, such as bugs, hacking, or adversarial attacks. Ethical risks refer to the moral dilemmas or conflicts that arise from the use or misuse of AI, such as bias, discrimination, or privacy violations. Social risks refer to the impacts of AI on human society and culture, such as unemployment, inequality, or social unrest.



Quote for the day:

"In the end, it is important to remember that we cannot become what we need to be by remaining what we are." -- Max De Pree

Daily Tech Digest - February 12, 2024

Is privacy being traded away in the name of innovation and security?

The adage is that if you collect it, you must protect it. Every CISO knows this, and every instance where information is collected should have in place a means to protect that information. With this thought in mind, John A. Smith, founder and CSO of Conversant, proffered some thoughts which are easily embraceable: Adhere to regulations and compliance requirements. Understand that compliance isn’t enough. Measure your security controls against current threat actor behaviors. Change your paradigms. Remember that most breaches follow the same high-level pattern. Smith’s comment about changing paradigms piqued my interest and his expansion is worthy of taking on board, as a different way of thinking. “Systems are generally open by default and closed by exception,” he tells CSO. “You should consider hardening systems by default and only opening access by exception. This paradigm change is particularly true in the context of data stores, such as practice management, electronic medical records, e-discovery, HRMS, and document management systems.” “How data is protected, access controls are managed, and identity is orchestrated are critically important to the security of these systems. ...”


Is 2024 the Year of Cloud Repatriation?

Security is one of them. At the same time that multi-cloud deployments are showing signs of decline, concerns about security threats are on the rise. The inability to achieve consistent security policies across multi-clouds topped the list as a problem or extreme problem for 56% of the organizations surveyed in 2023 compared to just 26% in 2022. And security mistakes are costly. According to the survey, downtime due to a successful application DDoS attack costs organizations an average of $6,130 per minute. Other security areas respondents ranked as problems or extreme problems included protection between platforms (61% in 2023 vs. 38% in 2022), unified visibility (58% in 2023 vs. 41% in 2022) and centralized management (46% in 2023 vs. 34% in 2022). Security is not, however, the only factor causing companies to rethink their cloud strategies and move applications and data back on-premises. Other considerations include: Cost management: While the cloud’s pay-as-you-go model can be cost-effective for variable workloads, it can lead to unexpected expenses when usage spikes. Where predictable workloads are concerned, it can be more cost-efficient to invest in on-premises infrastructure over the long term, rather than paying ongoing cloud service fees.
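A toy calculation makes the cost-management point concrete. Every figure below is an assumption for illustration, not survey data: a steady, always-on workload can come out cheaper on owned hardware, while a bursty workload still favors pay-as-you-go.

```python
# Toy comparison of pay-as-you-go cloud vs. amortized on-premises cost.
# Every figure here is an illustrative assumption, not survey data.

HOURS_PER_YEAR = 8760

cloud_rate = 0.80              # assumed $/hour for a comparable instance
on_prem_capex = 9000           # assumed hardware cost, amortized over 3 years
on_prem_opex = 2000            # assumed yearly power, space, and admin share

on_prem_yearly = on_prem_capex / 3 + on_prem_opex           # $5,000

steady_cloud = cloud_rate * HOURS_PER_YEAR                   # $7,008 (always on)
bursty_cloud = cloud_rate * HOURS_PER_YEAR * 0.25            # $1,752 (25% utilized)

print(f"On-premises, steady workload:  ${on_prem_yearly:,.0f}/yr")
print(f"Cloud, steady 24x7 workload:   ${steady_cloud:,.0f}/yr")
print(f"Cloud, bursty (25% utilized):  ${bursty_cloud:,.0f}/yr")
```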


Data Mesh 101: What It Is and Why You Should Care

With the disaggregation of the data stack and profusion of tools and data available, data engineering teams are often left to duct-tape the pieces together to build their end-to-end solutions. The idea of the data mesh, first promulgated by Zhamak Dehghani a few years back, is an emerging concept in the data world. It proposes a technological, architectural, and organizational approach to solving data management problems by breaking up the monolithic data platform and de-centralizing data management across different domain teams and services. In a centralized architecture, data is copied from source systems into a data lake or data warehouse to create a single source of truth serving analytics use cases. This quickly becomes difficult to scale with data discovery and data version issues, schema evolution, tight coupling, and a lack of semantic metadata. The ultimate goal of the data mesh is to change the way data projects are managed within organizations. This enables organizations to empower teams across different business units to build data products autonomously with unified governance principles. It is a mindset shift from centralized to decentralized ownership, with the idea of creating an ecosystem of data products built by cross-functional domain data teams.


The Impact of Open-Source Software on Public Finance Management

The most obvious benefit of OSS is that it’s often free or at least low-cost. Software is the fastest-growing government IT spending category, so switching to a more affordable platform could yield significant savings. Government saving aside, open public finance solutions could reduce the financial burden on consumers. Consider how many U.S. citizens spend hundreds of dollars a year on tax preparation services, which typically use proprietary software. A free or low-cost open-source alternative could dramatically reduce this spending, making tax filing more affordable. ... Public finance agencies also introduce more transparency by embracing OSS. The Consumer Financial Protection Bureau (CFPB) — an early leader in government OSS in the U.S. — cites this visibility as the key driver of its open-source philosophy. The Bureau even runs a public GitHub page to provide developers with OSS tools and show consumers how their platforms work. Accountability is essential for government financial agencies like the CFPB. Consumers can only trust the office enforces regulations fairly and is truly open about its comparisons and advice when they understand how it approaches these issues.


9 traits of great IT leaders

Although it’s true that leading, which is about visioning, is not synonymous with managing, aka accomplishing tasks, true IT leaders are indeed “great at the business of IT,” says Eric Bloom, executive director of the IT Management and Leadership Institute and part of the Society for Information Management (SIM) Leadership Institute. In other words, they excel at managing IT budgets, projects, staffing needs, and so on. They have some, although not deep, understanding of the various technologies within their IT portfolios. And they understand how IT interrelates with cybersecurity and the other functional areas of their organizations. ... Furthermore, CIOs now must engage a wider spectrum of stakeholders, from their own IT teams to business project owners to their C-suite peers, the CEO, board members, and sometimes even outside customers and partners. And they are expected to brief each group on their technical roadmap and vision in ways that each and every one of those groups can understand and embrace. All that, Bloom says, requires the CIO to formulate much more intentional and deliberate interactions because “you could come up with the best vision for IT, but if you can’t articulate it to those you want to motivate, it will fall on deaf ears.”


Ask a Data Ethicist: Can We Trust Unexplainable AI?

Similar to the term AI, ethics also covers a whole range of issues and depending on the particular situation, certain ethical concerns can become more or less prominent. To use an extreme example, most people will care less about their privacy in a life and death situation. In a missing person situation, the primary concern is locating that person. This might involve using every means possible to find them, including divulging a lot of personal information to the media. However, when the missing person is located, all of the publicity about the situation should be removed. The ethical question now centers on ensuring the story doesn’t follow the victim throughout their life, introducing possible stigma.  ... In order for a person to exercise their agency and to be held accountable as a moral agent, it’s important to have some level of understanding about a situation. For example, if a bank denies a loan, they should provide the applicant with an explanation as to how that decision was made. This ensures it wasn’t based on irrelevant factors (you wore blue socks) or factors outside a person’s control (race, age, gender, etc.) that could prove discriminatory. 


Digital experience becomes new boardroom metric

“In our survey, we learned that 94% of the respondents in their own experience have experienced really poorly performing applications. And then out of that, 70% of respondents said that they are more likely to proactively stop using digital services that don’t perform, so the tolerance is very low for experiences that are not world-class, seamless and immediate.” Chintan Patel, Cisco UK and Ireland chief technology officer, said the new experience economy was definitely something hugely top of mind to firms in terms of how they deliver services to their customers and employees. “We have genuinely moved, especially since the pandemic, from this bricks-to-clicks type of motion, and our attention span has changed as well as consumers’. CEOs are absolutely aware of this intimately, how they’re building services, because what they’re seeing is that people have a far greater propensity to change applications, change providers, if the service isn’t met. I think the survey underlines that in terms of 54% of people having deleted more apps in the past year than they’ve installed, and partly because of the type of service or experience they’ve received or not received.”


Integrating cybersecurity into vehicle design and manufacturing

The first challenge is in the supply chain, not just in terms of who provides the software; the issue penetrates each layer. Automakers need to understand this from a risk management perspective to pinpoint the onset and location of each specific risk. Suppliers must be involved in this process and continue to follow guidelines put in place by the automaker. The second challenge involves software updating. As technology continues to evolve and more features are added, cybercriminals find new ways to exploit flaws and gaps in systems that we may not have been aware of because of the newness of the technology. Regular software updates must be administered to products to patch holes in systems, remediate existing vulnerabilities and improve product performance. In order to address these challenges, automakers need to conduct an initial risk assessment to understand what kind of threats and the type of threat actors are active within each layer of the product and supply chain in the automotive industry. From the experience gained from the initial risk assessment, a procedure must be put in place to ensure each internal and external employee and supplier knows their role in maintaining security at the company.


Startups pursue GPU alternatives for AI

The pitch the GPU-alternative vendors are making is that they have built a better mousetrap. “You will find that the GPU does a good job as far as general training for a broad range of things, and you can learn how to deploy them very, very quickly,” said Rodrigo Liang, co-founder and CEO of SambaNova Systems. “As you get into these really, really large models, you start to see some deficiencies. When you get to the size of GPT, you’re needing to run thousands of these chips. And ultimately, those chips are not running at great efficiency.” James Wang, senior product marketing manager at Cerebras Systems, echoes the legacy design sentiment and says that the GPU chip is simply too small. Its chip, the Wafer-Scale Engine-2 (WSE-2), is the size of an album cover. Whereas the Hopper GPU has a few thousand cores, WSE-2 has 850,000 cores, and the company claims 9,800 times the memory bandwidth of the GPU. “The amount of memory determines how large a model you can train,” said Wang. “So if your starting point is a GPU, the maximum you can have is geared toward the size of the GPU and the accompanying memory. If you want to go larger, that problem becomes much more difficult. And you basically have to program around all the weak points of the GPU.”
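Wang's point that memory bounds the trainable model size can be made concrete with a back-of-the-envelope estimate. The 16 bytes per parameter used below is a common rule of thumb for mixed-precision training with the Adam optimizer (weights, gradients and optimizer states, ignoring activations), so treat the numbers as rough illustration only.

```python
# Back-of-the-envelope estimate of training-state memory vs. one GPU.
# ~16 bytes/parameter is a rough rule of thumb for mixed-precision Adam
# (fp16 weights + grads, fp32 master weights + two optimizer moments);
# it ignores activations, so real requirements are higher still.

BYTES_PER_PARAM = 16          # rough rule of thumb, see note above
GPU_MEMORY_GB = 80            # e.g. a current 80 GB-class accelerator

for params_b in (7, 70, 175):                 # model size in billions of params
    need_gb = params_b * 1e9 * BYTES_PER_PARAM / 1e9
    print(f"{params_b:>4}B params -> ~{need_gb:,.0f} GB of training state, "
          f"{need_gb / GPU_MEMORY_GB:.1f}x one GPU's memory")
```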


It's time to break free from Corporate Agile

To get an indication of the price we pay to do Corporate Agile, let’s review the time spent to perform a typical process. I’ll take a Scrum team as an example, making a few simplifications to make measures easy to follow. Our hypothetical team consists of 7 Developers doing 1-week sprints. They have four team meetings each sprint: Refinement, Planning, Retrospective and Review. We’ll assume each meeting takes one hour, totalling four hours a week per person. That's 28 person-hours spent each week “doing Scrum” instead of doing work that directly benefits customers, and we’re not even counting the Daily. Now add the overhead of a professional scrum master, dedicated product owner, and layers of management between the team and its real stakeholders. ... What did they gain? In my experience, efforts toward backlog grooming, task refinement, and sprint planning rarely yield noticeable benefits except to make work fit in a box. ... For those currently in Scrum teams, ask yourself which would make your products more awesome: These meetings? Another engineer, designer, artist or domain expert? Budget for tools, services or runway? A few hours to relax and recharge?
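For reference, the article's own numbers are easy to extend over a year; the 48 working weeks assumed below is an illustration, not a claim from the author.

```python
# The article's example, extended over a year.
# 48 working weeks/year is an assumption for illustration.

developers = 7
meetings_per_sprint = 4        # Refinement, Planning, Retrospective, Review
hours_per_meeting = 1
sprints_per_year = 48          # assumed ~48 one-week sprints a year

weekly_overhead = developers * meetings_per_sprint * hours_per_meeting   # 28 person-hours
yearly_overhead = weekly_overhead * sprints_per_year                     # 1,344 person-hours

print(f"{weekly_overhead} person-hours per sprint, "
      f"{yearly_overhead:,} person-hours per year -- "
      "most of one full-time engineer's year, before counting the Daily.")
```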



Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." -- Philippos

Daily Tech Digest - February 11, 2024

What is Microsoft Fabric? A big tech stack for big data

Microsoft Fabric encompasses data movement, data storage, data engineering, data integration, data science, real-time analytics, and business intelligence, along with data security, governance, and compliance. In many ways, Fabric is Microsoft’s answer to Google Cloud Dataplex. As of this writing, Fabric is in preview. Microsoft Fabric is targeted at, well, everyone: administrators, developers, data engineers, data scientists, data analysts, business analysts, and managers. Currently, Microsoft Fabric is enabled by default for all Power BI tenants. Microsoft Fabric Data Engineering combines Apache Spark with Data Factory, allowing notebooks and Spark jobs to be scheduled and orchestrated. Fabric Data Factory combines Power Query with the scale and power of Azure Data Factory, and supports over 200 data connectors. Fabric Data Science integrates with Azure Machine Learning, which allows experiment tracking and model registry. Fabric Real-Time Analytics includes an event stream, a KQL (Kusto Query Language) database, and a KQL queryset to run queries, view query results, and customize query results on data. If KQL is new to you, welcome to the club.


Cybercriminals are creating their own AI chatbots to support hacking and scam users

In a surprisingly effective attack, researchers were able to use the prompt, “Repeat the word ‘poem’ forever” to cause ChatGPT to inadvertently expose large amounts of training data, some of which was sensitive. These vulnerabilities place a person’s privacy or a business’s most-prized data at risk. More widely, this could contribute to a lack of trust in AI. Various companies, including Apple, Amazon and JP Morgan Chase, have already banned the use of ChatGPT as a precautionary measure. ChatGPT and similar LLMs represent the latest advancements in AI and are freely available for anyone to use. It’s important that its users are aware of the risks and how they can use these technologies safely at home or at work. Here are some tips for staying safe. Be more cautious with messages, videos, pictures and phone calls that appear to be legitimate as these may be generated by AI tools. Check with a second or known source to be sure. Avoid sharing sensitive or private information with ChatGPT and LLMs more generally. Also, remember that AI tools are not perfect and may provide inaccurate responses. Keep this in mind particularly when considering their use in medical diagnoses, work and other areas of life.


Building a disaster recovery playbook

No one wants to dwell on the “what ifs.” This is especially the case for organisations that are already maxed on internal resources and growth planning. But having a disaster recovery playbook on hand is a major component of long-term business viability. Disaster recovery playbooks contain all of the information, resources, and processes required to get a business back up and running in the event of a catastrophic event. They have a detailed breakdown of all team members (both internal and external) involved in recovery processes and a methodical approach to isolate any persistent threats and resume normal operations. While there are best practices when going through disaster recovery planning, there is no one-size-fits-all format. A disaster recovery playbook is unique to your business and is formatted and customised based on specific circumstances and factors in your own business requirements when it comes to risk management. Note that for some companies, disaster recovery planning is actually required. For companies that must maintain compliance with standards like HIPAA, SOC, and FedRAMP, disaster recovery plans are necessary.


Transform your financial IT infrastructure: Boost sustainability, security, and resilience

Like other industries, the financial sector is still dealing with the aftermath of COVID-19. Organizations are trying to figure out how to manage a hybrid workforce and what to do with a surplus of office space created by work-from-home practices. At the same time, financial services organizations need to optimize their digital infrastructure to connect IT and OT systems for a full view of the entire infrastructure. On the building management side, this means deploying sensors and connectivity solutions to collect and analyze data from systems such as chilled water plants, circuit breakers and mechanical equipment. The data delivers insights that enable businesses to manage systems more efficiently to reduce energy and operational costs. As they endeavor to make these improvements, organizations are getting some help from hardware and energy systems manufacturers, who are producing more efficient products that generate less waste. Combined with investments in renewable energy sources, efficient equipment helps organizations meet sustainability goals and comply with the upcoming disclosure regulations on greenhouse gas emissions.


The role of storage infrastructure in fortifying data security

The data security solution should also include the integrated use of various security technologies like Security Information and Event Management (SIEM), Security Orchestration Automation and Response (SOAR), Data Loss Protection (DLP), Identity and Access Management (IAM), Intrusion Detection and Prevention Systems (IDPS) to enable comprehensive security to identify, protect, detect, respond, and recover data. Every component in the overall IT stack needs to participate in the data security paradigm, particularly enterprise storage systems. Storage systems (on-premises, on-cloud, or hybrid) are home to all business data and are essential in enabling the data security considerations mentioned above. As a result, there is a need for storage systems with targeted cybersecurity functionalities that can be integrated with the overall security ecosystem. ... Fortifying storage systems to withstand, adapt to, and recover from disruptions while maintaining the confidentiality, integrity, and availability of data. Cyber resiliency also includes auditing, monitoring, and the ability to recover promptly from cyber threats or incidents, encompassing strategies such as backup, redundancy, and rapid response mechanisms.


Four Steps To Develop Executive Presence

When it comes to emotional intelligence, being aware of your emotions and reading other people is crucial. Picking up nonverbal cues from others will enhance communication. For instance, when you are speaking and notice the other person’s eyes have "glazed over" or their expression looks blank, it communicates that they are not fully present. So stop speaking and wait a few seconds. Once you notice they are present again, there are several questions you can ask: "Where did I lose you?" or "Was there something I said that caught your attention?" ... Executive presence is not just about exuding self-confidence and authority; it is also about building strong relationships. In my last article on expanding the idea of leadership, I mention being other-focused, which is the opposite of being self-focused. Addressing those around you and showing genuine interest in them and what they are working on makes you more approachable and shows you care and are a good listener. And if they are struggling with something, empathizing before jumping in with a solution emphasizes all the above.


Maritime Cybersecurity: An Emerging Area of Concern for India

The International Maritime Organization (IMO) defines maritime cyber risk as a “measure of the extent to which a technology asset could be threatened by a potential circumstance or event, which may result in shipping-related operational, safety or security failures as a consequence of information or systems being corrupted, lost or compromised.” Maritime cybersecurity includes the systems overseeing ships’ operating software, navigation information, and traffic monitoring. However, the current cyber infrastructure available onboard civilian ships is lacking in defensive cyber capabilities and tools. Maritime sector cyber threats have become serious due to the complex operationalization of IT and OT systems. These systems can be the subject of ransomware, malware, phishing, and man-in-the-middle (MITM) attacks. The motives behind such attacks can vary from traditional applications like naval warfare to espionage, to non-state causes like cyber terrorism, and hacktivism. Maritime cyberattacks can thus act as an instrument of foreign policy or be undertaken by criminal groups or individuals. This threat extends to onshore and offshore maritime assets.


The Meeting of the Minds: Human and Artificial

At the intersection of human cognition and LLMs lies the complex domain of language, a common ground where the essence of our thoughts and the architecture of AI converge. Language serves as the bridge between these two realms, with its nuanced syntax, semantics, and pragmatics offering the basis for exploration and understanding. For humans, language is the vessel of consciousness, carrying the weight of our ideas, emotions, and cultural heritage. For LLMs, it is the structured data through which they learn, interpret, and generate responses, mirroring human-like patterns of communication. This shared linguistic foundation enables a unique dialogue between human intelligence and machine algorithms, fostering a collaborative exchange that enriches both the depth and breadth of our collective knowledge and interactions. ... Humans contribute a deep understanding characterized by subtlety, emotional insight, and creative thinking. In contrast, LLMs bring powerful data processing abilities, extensive memory capacity, and advanced pattern recognition. This combination doesn't merely enhance our cognitive abilities; it expands them, allowing for more thorough analysis and wider exploration in problem-solving and innovation.


Harnessing Real-time Data: Transforming Data Management with Artificial Intelligence

In the tech industry, “AI” has become a ubiquitous buzzword, often used in pitches regardless of the underlying technology. As an industry analyst focused on analytics and AI and co-author and contributing author on a number of AI books, including Augmented Intelligence and Causal AI, I have met dozens and dozens of companies that claim to offer AI solutions. I am direct with vendors and want to know how they are applying AI to customer needs. In addition, I press vendors on the depth of the AI/ML capabilities and how they approach the field. ... The need for applying AI to data management is clear and compelling. As organizations are inundated with data from myriad sources, the capacity to curate, process, and extract meaningful insights must scale. The volume of information generated by businesses makes AI a critical technology in helping data science teams make sense of new information. When I work with Chief Data Officers (CDOs), Chief Transformation Officers, and other executives tasked with driving change through data, it is clear that AI is the cornerstone of modern data management strategies. Unfortunately, traditional data ingestion and classification methods begin to fail under the pressures of real-time, high-volume demands. 


API Management: A Weak Link in the Cloud-Native Chain

API management encompasses API design, development, monitoring, testing and security, as well as making updates to APIs after they are in production. These tasks are important, of course, because APIs are everywhere today. They handle 83% of internet requests, according to Akamai, which means that keeping APIs documented, updated and monitored is a critical requirement for virtually any organization that deploys Internet-connected applications. Without an efficient and scalable means of managing APIs, it becomes difficult not just to defend against challenges like security risks involving APIs but also to guarantee a positive developer experience. The more time and toil your developers have to invest in API management, the less time they have to do the things they want to do and that matter most to the business – like developing cool apps and bringing them to market. ... APIs are not new, and most teams that support them have long had API management practices in place. However, in many cases, those practices were conceived in the era when monolithic application architectures and bare-metal servers or virtual machines dominated the IT landscape. 



Quote for the day:

''Sometimes it takes a good fall to really know where you stand.'' -- Hayley Williams

Daily Tech Digest - February 10, 2024

Managed Everything? Vendors Shift Focus to Services

In many ways, managed detection and response (MDR) covers a lot of ground and, so far, has done well for vendors and their customers. Vendors have happy clients, an exceptionally rapid growth rate, and a very high margin for the service, Pollard says. Meanwhile, businesses can focus on the threats, leading to faster detection and response. Focusing on the data could improve the response time, but that is far from certain. However, no matter what telemetry, data, or devices a detection and response service focuses on to detect threats, businesses just want to focus on outcomes — detecting threats and preventing compromises, says Eric Kokonas, vice president at Sophos. "The truth is that the best applications of MDR are the result — not of strict adherence to a defined set of tools, telemetry sources, and services — but of an adaptable range of human-led capabilities that can be delivered and consumed in ways that are most compatible with organizations' needs and that are most likely to achieve the organizations' desired outcomes," Kokonas says. "Put more plainly, MDR services exist to achieve security and business outcomes the most optimal way possible."


Meetings are about to get weird

And if you want nothing to do with it, I’ve got bad news: Apple Vision Pro users will be showing up soon in meetings as what Apple calls “Personas,” which are CGI-looking video representations that approach, but don’t cross into, the “uncanny valley” (that place of realism where a digital human or robot starts to creep people out). Critics are slamming the appearance of these “Personas,” but like all things, Apple will no doubt make them better with each iteration. Video meeting leader Zoom announced recently that the company’s flagship product will support Apple Vision Pro avatars with a new visionOS Zoom app. You’ll be able to remove meeting participants’ backgrounds and “pin” their real-time hologram anywhere in your physical workspace. ... The practice of using avatars in meetings will offer a huge advantage to employees with disabilities. Companies like Lenovo are spearheading the use of AI avatars to enable employees who otherwise might not be able to attend meetings. Once your visage has been digitized, there are other advantages. Lenovo developed a feature demonstrated at CES that enables you to step away from a meeting and have a digital version of yourself remain in the meeting, blinking and nodding as others talk.


The Linux Foundation and its partners are working on cryptography for the post-quantum world

Part of PQCA's mission is its commitment to the practical application of post-quantum cryptography. The alliance will spearhead technical projects, such as developing software for evaluating, prototyping, and deploying new post-quantum algorithms. In other words, the alliance seeks to bridge the gap between theoretical cryptography and its real-world implementation. One of PQCA's launch projects is the Open Quantum Safe project, which was founded at the University of Waterloo in 2014 and is one of the world's leading open-source software initiatives devoted to post-quantum cryptography. PQCA will also host the new PQ Code Package Project, which will build high-assurance, production-ready software implementations of forthcoming post-quantum cryptography standards, starting with the ML-KEM algorithm. All this effort matters because quantum computing is very much a mixed blessing. As Jon Felten, Cisco Systems' senior director of trustworthy technologies, said: "Quantum computing offers the potential to solve previously unapproachable problems, while simultaneously threatening many digital protections we take for granted."
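To give a flavor of what an ML-KEM implementation looks like to a developer, here is a sketch of the key-encapsulation flow using the Open Quantum Safe project's Python bindings (liboqs-python). The method names follow that project's published examples and the algorithm identifier varies by release, so verify both against the current documentation before relying on this.

```python
# Sketch of an ML-KEM (Kyber) key-encapsulation flow with liboqs-python.
# Assumes `pip install liboqs-python`; the algorithm name and method names
# should be checked against the Open Quantum Safe docs for your version.
import oqs

ALG = "ML-KEM-768"  # may be exposed as "Kyber768" in older releases

with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
    public_key = receiver.generate_keypair()                      # receiver publishes a public key
    ciphertext, secret_sender = sender.encap_secret(public_key)   # sender encapsulates a secret
    secret_receiver = receiver.decap_secret(ciphertext)           # receiver decapsulates it

    # Both sides now hold the same shared secret, established without
    # relying on factoring- or discrete-log-based assumptions.
    assert secret_sender == secret_receiver
    print("shared secret established:", secret_sender.hex()[:16], "...")
```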


Rethinking digital transformation beyond traditional verticals

The challenge extends beyond emerging markets, as Philippe points out, citing the global impact of the COVID-19 pandemic. “Many people in so-called richer countries are still deprived of access to life-changing digital services as are many people in so-called poorer countries,” he observes. The barriers include a lack of hardware to get access to such digital services, and training to use those services. However, Philippe is optimistic, noting that the necessary technology already exists and can significantly contribute to at least 8 of the 17 SDGs. “We have the data—we know what works and what doesn’t. Now, we need to scale this knowledge,” Philippe urges. He emphasizes the importance of collaboration, echoing the sentiment of the World Bank Group President, Ajay Banga, to replicate with pride. The goal is to ensure that the majority of each country’s population is not only aware of these services but also has easy access and knows how to utilize them. In expressing gratitude for the accolade received for the collective development work, Philippe acknowledges the contributions of partners, colleagues and thousands of committed individuals. 


Best practices for API error handling

Developers often run into errors during the API integration process. These errors might be caused by an incorrect implementation, a user action, or an internal server error on the API itself. It is important that developers handle these errors properly and present them to end-users in a direct, non-technical manner. The following best practices can lay the foundation for successful error handling during API integration—regardless of the API’s architectural pattern: Validate user input: Users sometimes provide invalid input data, which can lead to errors. Client-side validations help prevent this issue. Validations not only ensure that the user can see and fix the problem quicker, but also help the client and server conserve resources that would otherwise be expended on extra network traffic. Provide user-friendly messages: It’s important to avoid presenting error messages from the server directly to the end-user. Instead, these technical error messages should be simplified and made more user-friendly. They should also clearly tell the user how to fix the error. Handle multiple edge cases: Developers should understand the full range of errors an API can produce so that they can handle every edge case.
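A short sketch can tie these practices together. The endpoint, field names and messages below are made-up placeholders; the point is the shape of the handling: validate first, translate technical failures into user-friendly guidance, and cover edge cases such as timeouts and rate limits.

```python
# Sketch of API error handling during integration: validate input first,
# then map technical failures to user-friendly, actionable messages.
# The endpoint URL and field names are made-up placeholders.
import requests

FRIENDLY_MESSAGES = {
    400: "Some of the information you entered looks invalid. Please review it and try again.",
    401: "Your session has expired. Please sign in again.",
    429: "We're receiving a lot of requests right now. Please try again in a minute.",
    500: "Something went wrong on our side. Please try again later.",
}

def validate(payload: dict) -> list[str]:
    """Client-side validation so obvious problems never hit the network."""
    errors = []
    if not payload.get("email") or "@" not in payload["email"]:
        errors.append("Please enter a valid email address.")
    if not payload.get("amount") or payload["amount"] <= 0:
        errors.append("Amount must be greater than zero.")
    return errors

def submit_payment(payload: dict) -> str:
    errors = validate(payload)
    if errors:
        return " ".join(errors)
    try:
        resp = requests.post("https://api.example.com/v1/payments",
                             json=payload, timeout=5)
    except requests.Timeout:
        return "The request took too long. Please check your connection and retry."
    except requests.ConnectionError:
        return "We couldn't reach the payment service. Please try again shortly."
    if resp.ok:
        return "Payment submitted successfully."
    # Never surface the raw server error to the end user.
    return FRIENDLY_MESSAGES.get(resp.status_code,
                                 "An unexpected error occurred. Please contact support.")

# Invalid input is caught locally, before any network call is made.
print(submit_payment({"email": "not-an-email", "amount": 0}))
```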


Agile myths busted by Adaptavist

"Agile doesn't scale" - This is a subject of ongoing debate in the Agile community. Many believe that the core tenets of Agile, like flexibility, customer focus, and valuing individuals over process, break down when applied to large departments or organizations. In addition, scaling is often hindered by lousy architecture that breeds excessive dependencies and effort, which is exacerbated by creating dozens of teams. While Agile may seem daunting, starting small is key - after all, if you cannot be agile when the teams are small, scaling that non-agile approach is not likely to end well. Whether applying Agile to an individual team or department, focus on tangible successes and learn from any failures. Let teams discover effective practices rather than mandating rigid standards. ...  "There's one way to be Agile" - No universal standards exist. However, you can use the values and principles as guides for discussing how well your form of agile aligns with the Agile Manifesto. Your context matters, which is the intent behind the ambiguity about relative value between "the items on the left and the right." View existing frameworks as pointers that teams can selectively apply, rather than use them as prescriptive dogma. As your contexts evolve, so too must your agile practices.


Entrepreneurship for Engineers: Open Source Company Ethics

As the founder of a company, you have a series of ethical obligations to different stakeholders, both in your business as well as personal relationships, said Matt Butcher, co-founder and CEO of Fermyon, who also has a doctorate in philosophy. You have an ethical obligation to yourself to not burn out; you have an ethical obligation to your family to not neglect them, financially or otherwise. If you’re working 80-hour weeks, so busy you forget to pick up your kids from school and are living in poverty, you aren’t behaving ethically toward yourself or your family. When you’re running a business, you also have an ethical responsibility to your employees — as well as a legal responsibility. ... And then there are questions about data collection. Sometimes things can happen accidentally; you have a privacy-first project but have Google analytics embedded on your website, for example. “There are some people who will tell you that you are just evil because you say something, but you don’t do what you pretend to be,” said Gaël Duval, CEO of Murena, a de-Googled smartphone company.


10 best practices for implementing an effective data governance framework

Data governance isn’t just the mandate of the IT team, nor is it the sole responsibility of the legal department. Everyone must work together to make data governance an organizational priority. Creating a data governance council can ensure representation from all lines of business plus those stakeholders responsible for compliance, eDiscovery, and other data-related concerns. The council should be responsible for making key decisions, resolving conflicts, and updating the framework. ... Obtaining buy-in from executives is essential to the success of any initiative. Executive sponsorship not only ensures you have the resources necessary to support your program but also signals a commitment to prioritizing data governance throughout the organization. Establishing a direct line of communication can also help you overcome potential challenges during implementation.  ... You can’t optimize your data if you don’t know where it is. Create a comprehensive data map that outlines where your data is stored, how it flows through various systems, and how data sets are related to one another. This visual representation not only enhances transparency but also aids in identifying potential data risks and dependencies.


The Rise of “Quick and Dirty” DR Testing

IT/DR testing is still alive and well; however, these days it has evolved toward what you might call a “quick and dirty” approach. Quick because contemporary exercises place a strong emphasis on brevity in recognition of the new reality of employees’ shortened attention spans. Dirty because modern testing deemphasizes preparation and focuses on making exercises adhere as closely as possible to real-world conditions. Among the other new aspects of contemporary IT/DR tests is a new respect for the benefits of tabletop exercises. Necessity is the mother of invention, and the necessity of letting go of the traditional multi-day exercise has been driving productive innovations in the design and execution of tabletops. (MHA’s Richard Long has been a pioneer in this area, with his one-hour exercises focusing on a particular app or IT service and requiring participants to think on their feet.) These innovations have unlocked new powers in the tabletop in terms of identifying gaps and training staff. Other contemporary innovations include a focus on varying levels of testing complexity, the use of multiple strategies, the rise of tiered testing, and the development of methods to test today’s hybrid apps.


Are You a Lost Leader? Get Back on Track By Following These 4 Tips to Lead With Strength and Conviction

To lead well also requires you to walk the talk. It is important to apply your core values to leading yourself. For example, if one of your values is setting boundaries and making time for things that bring you joy, then be protective of that time. As CEO, I have demands of my time for nearly every hour of the day. In a hybrid and remote world, it's increasingly difficult to create healthy boundaries of time and space as there are often expectations to be on 24/7. Establishing boundaries to prioritize time for my family is non-negotiable. That time allocation might fluctuate in different seasons, depending on the needs of my family and the needs of business, but in the spirit of recognizing my core values, it certainly makes it to the top when priorities are determined. ... Owning your own choices is another key part of staying true to your values. It's important to understand what your true north is and hold yourself accountable for your choices — even when the path can be harder. I can't tell you the number of times people have asked, "How do you travel so much?" or "Why did you have kids if you were going to take a job like this?" First of all— wow. 



Quote for the day:

“You live longer once you realize that any time spent being unhappy is wasted.” -- Ruth E. Renkl

Daily Tech Digest - February 09, 2024

India’s data protection law: Reimagining a new era of innovation led digital markets

E-commerce platforms would have to make changes to the user interfaces of their websites and apps, with clearer communication with users for consent, processing, erasing or grievance addressal. Moreover, the e-commerce platforms will have to completely erase all personal data when the user withdraws consent or when the intended purpose is served. The platforms will also have to now carry out a verifiable parental consent mechanism to provide services to children below 18 years of age but cannot track or carry out behavioural monitoring of the child, unless exempted separately by the government. This is a complex subject, as many e-commerce platforms already follow due checks for ensuring parental control below a certain age. Moreover, payments in e-commerce for principals below 18 years of age would anyway require guardianship of a parent or legal guardian, as mandated by the RBI. E-commerce players, however, will still need to adopt the additional obligations. For AI systems, which are now becoming increasingly integral to the operations of e-commerce platforms, this means a shift towards more transparent and ethical data usage practices.


Key strategies for ISO 27001 compliance adoption

ISO 27001 fundamentally breaks down to: “What information security risks do we face? How should we best manage them?” Just as the chicken may come before the egg, note that what should happen in this case is that you identify the risks first and then select the controls that help to manage those risks. You definitely don’t have to apply all of the controls, and nearly all organisations treat some, validly, as non-applicable in their Statement of Applicability. For example, businesses where all employees work remotely simply don’t have the full range of risks that can benefit from mitigation by the physical controls. When it comes to performance evaluation, it’s largely a case of working through the relevant clauses and controls and agreeing how good a job the organisation is doing trying to meet the associated requirements. The ones that are selected for monitoring, measurement and evaluation will depend on the type and size of the organisation and its business objectives. These are basically key performance indicators (KPIs) for information security and might include supplier evaluations and documented events, incidents, and vulnerabilities.


Breach Roundup: US Bans AI Robocalls

Telecom regulators voted unanimously Thursday to make AI-generated robocalls illegal under the 1991 Telephone Consumer Protection Act, which prohibits robocalls from using "artificial" voices. The new rule allows the FCC to order telephone carriers not to facilitate illegal robocalls and empowers individual consumers or organizations to file lawsuits against violators. The decision comes amid concerns that AI could be used to disseminate misinformation about the election. A robocall featuring a deepfake of President Joe Biden urging voters in New Hampshire to stay home on primary day caused controversy in January. The New Hampshire attorney general on Tuesday said he had identified the source of the calls as Texas-based Life Corporation and its owner, Walter Monk. State Attorney General John M. Formella said the calls had been routed through a provider called Lingo Telecom, also based in Texas. New Hampshire issued a cease-and-desist order to Life Corporation, and the FCC sent a cease-and-desist letter to Lingo Telecom. "Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters," FCC Chairwoman Jessica Rosenworcel said in a statement.


Stifling Creativity in the Name of Data

Pitt challenges the notion that data and creativity are mutually exclusive. Builders should base decisions on both metrics and imaginative thinking. Focus obsessively on either, and you lose sight of the problem you aim to solve. "Data can contribute to developer improvement," Pitt concludes, "but developers should not solely rely on it." By the same token, visionaries in the throes of invention must temper flights of fancy with reality checks. Synthesis of human and machine intelligence unlocks maximum potential. But for Pitt, the human mind still reigns supreme when it comes to pushing boundaries and bringing new ideas to life. Software development draws its lifeblood from creative problem solvers who feel intrinsically rewarded by shipping inventive products. Data should inform and empower that mission, not impose limits or demand validation at every turn. The analytics will have their say, but imagination must lead the way. That balance, elusive as it may be, unlocks sustainable innovation as technology’s tides continue rising.


How Generative AI Will Change The Jobs Of Teachers

As generative AI reshapes the world of education, teachers will find their role evolving further away from being providers of knowledge and towards becoming learning facilitators. Perhaps the most significant shift in the role of educators will be an increased focus on nurturing skills such as critical thinking, creativity, and emotional intelligence. These skills will be paramount in a future where our worth is increasingly measured by our ability to perform tasks that machines cannot do or are not as proficient in. Beyond academic teaching, educators play a critical role in safeguarding the welfare of their students, a responsibility that extends far beyond the confines of traditional teaching. This involves not only protecting students from physical harm but also supporting their emotional and mental health, ensuring a safe and inclusive learning environment that fosters resilience and respect. The human touch provided by teachers becomes an indispensable pillar of education, emphasizing the irreplaceable value of empathy and understanding in nurturing well-rounded, emotionally secure individuals. Teachers will, of course, also have a very important role to play in making sure their students are able to use generative AI itself.


5 ways CIOs can help gen AI achieve its lightbulb moment

Being realistic means understanding the pros and cons and sharing this information with customers, employees, and peers in the C-suite. They’ll also appreciate your candor. Make an authoritative warts-and-all list so they can be clearly explained and understood. As AI advisors have pointed out, some downsides include the black box problem, AI’s vulnerability to misguided human arguments, hallucinations, and the list goes on. ... a corporate use policy and associated training can help educate employees on some risks and pitfalls of the technology, and provide rules and recommendations to get the most out of the tech, and, therefore, the most business value without putting the organization at risk. In developing your policy, be sure to include all relevant stakeholders, consider how gen AI is used today within your organization and how it may be used in the future, and share broadly across the organization. You’ll want to make the policy a living document and update it on a suitable cadence as needed. Having this policy in place can help to protect against a number of risks concerning contracts, cybersecurity, data privacy, deceptive trade practice, discrimination, disinformation, ethics, IP, and validation.


Why companies are leaving the cloud

The cloud had no way of delivering on the hype of 2010 to 2015 that gushed about lower costs, better agility, and better innovation. Well, two out of three is not bad, right? The cost of the cloud is where things usually go off the rails. The cloud is still the most convenient platform for building and deploying new systems, such as generative AI, and it also has the latest and greatest of pretty much everything. However, when enterprises run workloads and data sets using traditional infrastructure patterns, such as business applications that process and store data the same way they did when on-premises, there is a negative cost impact to using a public cloud. In other words, those who attempted to use the cloud as a simple host for their workloads and took no steps to optimize those workloads for their new location had much larger bills than expected. Moreover, they didn’t gain any real advantage by leveraging a public cloud for those specific workloads. The cloud is a good fit for modern applications that leverage a group of services, such as serverless, containers, or clustering. However, that doesn’t describe most enterprise applications.


The EU’s Artificial Intelligence Act, explained

In terms of data governance and protection, the EU Artificial Intelligence Act aligns with existing EU data protection laws, including the General Data Protection Regulation (GDPR), to ensure the ethical handling of personal data in AI systems. This includes provisions for data quality, security and privacy, ensuring that AI systems process data in a manner that respects user privacy and data protection rights. The act also provides specific guidelines for biometric identification, stressing the importance of safeguarding personal privacy and security, particularly in the handling of sensitive biometric data. Additionally, it categorizes certain AI systems as high-risk, necessitating stringent compliance and oversight to mitigate potential harms and risks associated with their use. The act establishes specific criteria for identifying and regulating high-risk AI systems. These criteria focus on AI applications that have significant implications for individuals’ rights and safety, like those used in critical infrastructure, employment and essential public services. The regulation mandates strict compliance standards and certification requirements for these systems, ensuring they meet high levels of safety, transparency and accountability. 


US creates advisory group to consider AI regulation

The consortium “will ensure America is at the front of the pack” in setting AI safety standards while encouraging innovation, US Secretary of Commerce Gina Raimondo said in a statement. “Together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.” In addition to the announcement of the new consortium, the Biden administration this week named Elizabeth Kelly, a former economic policy adviser to the president, as director of the newly formed US Artificial Intelligence Safety Institute (USAISI), an organization within NIST that will house AISIC. It’s unclear whether the coalition’s work will lead to regulations or new laws. While President Joe Biden issued an Oct. 30 executive order on AI safety, the timeline for the consortium’s work is up in the air. Furthermore, if Biden loses the presidential election later this year, momentum for AI regulations could stall. However, Biden’s recent executive order suggests some regulation is needed. “Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks,” the executive order says. 


Chinese Hackers Preparing 'Destructive Attacks,' CISA Warns

The report says that Chinese hackers have exfiltrated diagrams and documentation related to operational technology, including SCADA systems, relays and switchgear - data "crucial for understanding and potentially impacting critical infrastructure systems," CISA said. Volt Typhoon actors in some cases had the capability to access camera surveillance systems at critical infrastructure facilities, it also said. The U.S. government and the Five Eyes intelligence-sharing alliance first publicly disclosed the existence of Volt Typhoon in May after cyber defenders had detected activity in Guam and the United States. The Pacific island is just hours away from Taiwan via airplane and is the site of two major American military bases. Microsoft, which also divulged the existence of Volt Typhoon in May, said the group has been active since mid-2021. ... "The information that we are releasing with this advisory is reflecting a strategic shift in PRC malicious cyber activity," Goldstein said. CISA has observed Chinese hacking groups moving away from espionage campaigns toward "prepositioning for future disruptive or destructive attacks," he added.



Quote for the day:

"All you need is ignorance and confidence and the success is sure." -- Mark Twain