Daily Tech Digest - February 12, 2024

Is privacy being traded away in the name of innovation and security?

The adage is that if you collect it, you must protect it. Every CISO knows this, and every instance where information is collected should have a means in place to protect that information. With this in mind, John A. Smith, founder and CSO of Conversant, proffered some thoughts that are easy to embrace: adhere to regulations and compliance requirements; understand that compliance isn’t enough; measure your security controls against current threat actor behaviors; change your paradigms; and remember that most breaches follow the same high-level pattern. Smith’s comment about changing paradigms piqued my interest, and his expansion is worth taking on board as a different way of thinking. “Systems are generally open by default and closed by exception,” he tells CSO. “You should consider hardening systems by default and only opening access by exception. This paradigm change is particularly true in the context of data stores, such as practice management, electronic medical records, e-discovery, HRMS, and document management systems.” “How data is protected, access controls are managed, and identity is orchestrated are critically important to the security of these systems. ...”


Is 2024 the Year of Cloud Repatriation?

Security is one of them. At the same time that multi-cloud deployments are showing signs of decline, concerns about security threats are on the rise. The inability to achieve consistent security policies across multiple clouds topped the list as a problem or extreme problem for 56% of the organizations surveyed in 2023, compared to just 26% in 2022. And security mistakes are costly. According to the survey, downtime due to a successful application DDoS attack costs organizations an average of $6,130 per minute. Other security areas respondents ranked as problems or extreme problems included protection between platforms (61% in 2023 vs. 38% in 2022), unified visibility (58% in 2023 vs. 41% in 2022), and centralized management (46% in 2023 vs. 34% in 2022). Security is not, however, the only factor causing companies to rethink their cloud strategies and move applications and data back on-premises. Other considerations include cost management: while the cloud’s pay-as-you-go model can be cost-effective for variable workloads, it can lead to unexpected expenses when usage spikes. Where predictable workloads are concerned, it can be more cost-efficient to invest in on-premises infrastructure over the long term rather than paying ongoing cloud service fees.
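The cost trade-off described above comes down to a simple break-even comparison. The sketch below illustrates the idea; all rates and figures are hypothetical, not drawn from the survey.

```python
# Hypothetical break-even sketch: a steady, predictable workload priced
# as cloud pay-as-you-go vs. amortized on-premises hardware.
def monthly_cloud_cost(hours: float, rate_per_hour: float) -> float:
    """Pay-as-you-go: cost scales directly with usage hours."""
    return hours * rate_per_hour

def monthly_onprem_cost(capex: float, amortization_months: int,
                        opex_per_month: float) -> float:
    """On-premises: fixed hardware cost amortized, plus running costs."""
    return capex / amortization_months + opex_per_month

# Assumed figures, for illustration only.
hours = 24 * 30  # an always-on workload, ~720 hours/month
cloud = monthly_cloud_cost(hours, rate_per_hour=2.50)
onprem = monthly_onprem_cost(capex=36_000, amortization_months=36,
                             opex_per_month=400)

print(f"cloud:   ${cloud:,.2f}/month")
print(f"on-prem: ${onprem:,.2f}/month")
```

With these invented numbers the always-on workload favors on-premises; a bursty workload running a fraction of the month would flip the result, which is the crux of the repatriation decision.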


Data Mesh 101: What It Is and Why You Should Care

With the disaggregation of the data stack and a profusion of tools and data available, data engineering teams are often left to duct-tape the pieces together to build their end-to-end solutions. The data mesh, first promulgated by Zhamak Dehghani a few years back, is an emerging concept in the data world. It proposes a technological, architectural, and organizational approach to solving data management problems by breaking up the monolithic data platform and decentralizing data management across different domain teams and services. In a centralized architecture, data is copied from source systems into a data lake or data warehouse to create a single source of truth serving analytics use cases. This quickly becomes difficult to scale, with data discovery and data versioning issues, schema evolution, tight coupling, and a lack of semantic metadata. The ultimate goal of the data mesh is to change the way data projects are managed within organizations, empowering teams across different business units to build data products autonomously under unified governance principles. It is a mindset shift from centralized to decentralized ownership, with the idea of creating an ecosystem of data products built by cross-functional domain data teams.


The Impact of Open-Source Software on Public Finance Management

The most obvious benefit of OSS is that it’s often free or at least low-cost. Software is the fastest-growing government IT spending category, so switching to a more affordable platform could yield significant savings. Government savings aside, open public finance solutions could reduce the financial burden on consumers. Consider how many U.S. citizens spend hundreds of dollars a year on tax preparation services, which typically use proprietary software. A free or low-cost open-source alternative could dramatically reduce this spending, making tax filing more affordable. ... Public finance agencies also introduce more transparency by embracing OSS. The Consumer Financial Protection Bureau (CFPB) — an early leader in government OSS in the U.S. — cites this visibility as the key driver of its open-source philosophy. The Bureau even runs a public GitHub page to provide developers with OSS tools and show consumers how their platforms work. Accountability is essential for government financial agencies like the CFPB. Consumers can trust that the bureau enforces regulations fairly, and is truly open about its comparisons and advice, only when they understand how it approaches these issues.


9 traits of great IT leaders

Although it’s true that leading, which is about visioning, is not synonymous with managing, aka accomplishing tasks, true IT leaders are indeed “great at the business of IT,” says Eric Bloom, executive director of the IT Management and Leadership Institute and part of the Society for Information Management (SIM) Leadership Institute. In other words, they excel at managing IT budgets, projects, staffing needs, and so on. They have some, although not deep, understanding of the various technologies within their IT portfolios. And they understand how IT interrelates with cybersecurity and the other functional areas of their organizations. ... Furthermore, CIOs now must engage a wider spectrum of stakeholders, from their own IT teams to business project owners to their C-suite peers, the CEO, board members, and sometimes even outside customers and partners. And they are expected to brief each group on their technical roadmap and vision in ways that each and every one of those groups can understand and embrace. All that, Bloom says, requires the CIO to formulate much more intentional and deliberate interactions because “you could come up with the best vision for IT, but if you can’t articulate it to those you want to motivate, it will fall on deaf ears.”


Ask a Data Ethicist: Can We Trust Unexplainable AI?

Like the term AI, ethics covers a whole range of issues, and depending on the particular situation, certain ethical concerns can become more or less prominent. To use an extreme example, most people will care less about their privacy in a life-and-death situation. In a missing person situation, the primary concern is locating that person. This might involve using every means possible to find them, including divulging a lot of personal information to the media. However, when the missing person is located, all of the publicity about the situation should be removed. The ethical question now centers on ensuring the story doesn’t follow the victim throughout their life, introducing possible stigma. ... In order for a person to exercise their agency and to be held accountable as a moral agent, it’s important to have some level of understanding about a situation. For example, if a bank denies a loan, it should provide the applicant with an explanation of how that decision was made. This ensures it wasn’t based on irrelevant factors (you wore blue socks) or factors outside a person’s control (race, age, gender, etc.) that could prove discriminatory.


Digital experience becomes new boardroom metric

“In our survey, we learned that 94% of respondents have experienced really poorly performing applications. And of those, 70% said that they are more likely to proactively stop using digital services that don’t perform, so the tolerance is very low for experiences that are not world-class, seamless and immediate.” Chintan Patel, Cisco UK and Ireland chief technology officer, said the new experience economy was hugely top of mind for firms in terms of how they deliver services to their customers and employees. “We have genuinely moved, especially since the pandemic, from this bricks-to-clicks type of motion, and our attention spans as consumers have changed as well. CEOs are intimately aware of this in how they’re building services, because what they’re seeing is that people have a far greater propensity to change applications and change providers if the service isn’t met. I think the survey underlines that, with 54% of people having deleted more apps in the past year than they’ve installed, partly because of the type of service or experience they’ve received or not received.”


Integrating cybersecurity into vehicle design and manufacturing

The first challenge is in the supply chain, not just in terms of who provides the software; the issue penetrates each layer. Automakers need to understand this from a risk management perspective to pinpoint the onset and location of each specific risk. Suppliers must be involved in this process and continue to follow guidelines put in place by the automaker. The second challenge involves software updating. As technology continues to evolve and more features are added, cybercriminals find new ways to exploit flaws and gaps in systems that we may not have been aware of because of the newness of the technology. Regular software updates must be administered to products to patch holes in systems, remediate existing vulnerabilities, and improve product performance. To address these challenges, automakers need to conduct an initial risk assessment to understand what kinds of threats and which threat actors are active within each layer of the product and supply chain in the automotive industry. From the experience gained from the initial risk assessment, a procedure must be put in place to ensure each internal and external employee and supplier knows their role in maintaining security at the company.


Startups pursue GPU alternatives for AI

The pitch the GPU-alternative vendors are making is that they have built a better mousetrap. “You will find that the GPU does a good job as far as general training for a broad range of things, and you can learn how to deploy them very, very quickly,” said Rodrigo Liang, co-founder and CEO of SambaNova Systems. “As you get into these really, really large models, you start to see some deficiencies. When you get to the size of GPT, you’re needing to run thousands of these chips. And ultimately, those chips are not running at great efficiency.” James Wang, senior product marketing manager at Cerebras Systems, echoes the legacy-design sentiment and says that the GPU chip is simply too small. Cerebras’ chip, the Wafer-Scale Engine-2 (WSE-2), is the size of an album cover. Whereas the Hopper GPU has a few thousand cores, the WSE-2 has 850,000 cores, and the company claims 9,800 times the memory bandwidth of the GPU. “The amount of memory determines how large a model you can train,” said Wang. “So if your starting point is a GPU, the maximum you can have is geared toward the size of the GPU and the accompanying memory. If you want to go larger, that problem becomes much more difficult. And you basically have to program around all the weak points of the GPU.”


It's time to break free from Corporate Agile

To get an indication of the price we pay to do Corporate Agile, let’s review the time spent to perform a typical process. I’ll take a Scrum team as an example, making a few simplifications to make measures easy to follow. Our hypothetical team consists of 7 Developers doing 1-week sprints. They have four team meetings each sprint: Refinement, Planning, Retrospective and Review. We’ll assume each meeting takes one hour, totalling four hours a week per person. That's 28 person-hours spent each week “doing Scrum” instead of doing work that directly benefits customers, and we’re not even counting the Daily. Now add the overhead of a professional scrum master, dedicated product owner, and layers of management between the team and its real stakeholders. ... What did they gain? In my experience, efforts toward backlog grooming, task refinement, and sprint planning rarely yield noticeable benefits except to make work fit in a box. ... For those currently in Scrum teams, ask yourself which would make your products more awesome: These meetings? Another engineer, designer, artist or domain expert? Budget for tools, services or runway? A few hours to relax and recharge?
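The ceremony arithmetic above is easy to generalize. This small sketch reproduces the 28 person-hour figure and shows how quickly the total grows once the Daily is counted; the meeting lengths are the article's simplified assumptions.

```python
# Sketch of the ceremony-overhead arithmetic from the Scrum example above.
def ceremony_hours_per_week(devs: int, meetings: int, hours_per_meeting: float,
                            daily_minutes: float = 0) -> float:
    """Person-hours per week a team spends in Scrum ceremonies.

    daily_minutes is the length of the Daily, assumed to run 5 days a week.
    """
    weekly_per_person = meetings * hours_per_meeting + (daily_minutes / 60) * 5
    return devs * weekly_per_person

# 7 developers, four one-hour ceremonies, Daily excluded: the article's 28 hours
print(ceremony_hours_per_week(devs=7, meetings=4, hours_per_meeting=1))   # 28.0

# Add a 15-minute Daily on 5 days and the weekly total rises to 36.75
print(ceremony_hours_per_week(7, 4, 1, daily_minutes=15))                 # 36.75
```

At roughly a full person-week of capacity per sprint, the overhead is comparable to the "another engineer" the author asks readers to weigh it against.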



Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." -- Philippos

Daily Tech Digest - February 11, 2024

What is Microsoft Fabric? A big tech stack for big data

Microsoft Fabric encompasses data movement, data storage, data engineering, data integration, data science, real-time analytics, and business intelligence, along with data security, governance, and compliance. In many ways, Fabric is Microsoft’s answer to Google Cloud Dataplex. As of this writing, Fabric is in preview. Microsoft Fabric is targeted at, well, everyone: administrators, developers, data engineers, data scientists, data analysts, business analysts, and managers. Currently, Microsoft Fabric is enabled by default for all Power BI tenants. Microsoft Fabric Data Engineering combines Apache Spark with Data Factory, allowing notebooks and Spark jobs to be scheduled and orchestrated. Fabric Data Factory combines Power Query with the scale and power of Azure Data Factory, and supports over 200 data connectors. Fabric Data Science integrates with Azure Machine Learning, which allows experiment tracking and model registry. Fabric Real-Time Analytics includes an event stream, a KQL (Kusto Query Language) database, and a KQL queryset to run queries, view query results, and customize query results on data. If KQL is new to you, welcome to the club.


Cybercriminals are creating their own AI chatbots to support hacking and scam users

In a surprisingly effective attack, researchers were able to use the prompt, “Repeat the word ‘poem’ forever” to cause ChatGPT to inadvertently expose large amounts of training data, some of which was sensitive. These vulnerabilities place a person’s privacy or a business’s most-prized data at risk. More widely, this could contribute to a lack of trust in AI. Various companies, including Apple, Amazon and JP Morgan Chase, have already banned the use of ChatGPT as a precautionary measure. ChatGPT and similar LLMs represent the latest advancements in AI and are freely available for anyone to use. It’s important that their users are aware of the risks and how they can use these technologies safely at home or at work. Here are some tips for staying safe. Be more cautious with messages, videos, pictures and phone calls that appear to be legitimate, as these may be generated by AI tools. Check with a second or known source to be sure. Avoid sharing sensitive or private information with ChatGPT and LLMs more generally. Also, remember that AI tools are not perfect and may provide inaccurate responses. Keep this in mind particularly when considering their use in medical diagnoses, work and other areas of life.


Building a disaster recovery playbook

No one wants to dwell on the “what ifs.” This is especially the case for organisations that are already maxed on internal resources and growth planning. But having a disaster recovery playbook on hand is a major component of long-term business viability. Disaster recovery playbooks contain all of the information, resources, and processes required to get a business back up and running in the event of a catastrophic event. They have a detailed breakdown of all team members (both internal and external) involved in recovery processes and a methodical approach to isolate any persistent threats and resume normal operations. While there are best practices when going through disaster recovery planning, there is no one-size-fits-all format. A disaster recovery playbook is unique to your business and is formatted and customised based on specific circumstances and factors in your own business requirements when it comes to risk management. Note that for some companies, disaster recovery planning is actually required. For companies that must maintain compliance with standards like HIPAA, SOC, and FedRAMP, disaster recovery plans are necessary.


Transform your financial IT infrastructure: Boost sustainability, security, and resilience

Like other industries, the financial sector is still dealing with the aftermath of COVID-19. Organizations are trying to figure out how to manage a hybrid workforce and what to do with a surplus of office space created by work-from-home practices. At the same time, financial services organizations need to optimize their digital infrastructure to connect IT and OT systems for a full view of the entire infrastructure. On the building management side, this means deploying sensors and connectivity solutions to collect and analyze data from systems such as chilled water plants, circuit breakers and mechanical equipment. The data delivers insights that enable businesses to manage systems more efficiently to reduce energy and operational costs. As they endeavor to make these improvements, organizations are getting some help from hardware and energy systems manufacturers, who are producing more efficient products that generate less waste. Combined with investments in renewable energy sources, efficient equipment helps organizations meet sustainability goals and comply with the upcoming disclosure regulations on greenhouse gas emissions.


The role of storage infrastructure in fortifying data security

The data security solution should also include the integrated use of various security technologies like Security Information and Event Management (SIEM), Security Orchestration, Automation and Response (SOAR), Data Loss Prevention (DLP), Identity and Access Management (IAM), and Intrusion Detection and Prevention Systems (IDPS) to provide comprehensive capabilities to identify, protect, detect, respond, and recover. Every component in the overall IT stack needs to participate in the data security paradigm, particularly enterprise storage systems. Storage systems (on-premises, on-cloud, or hybrid) are home to all business data and are essential in enabling the data security considerations mentioned above. As a result, there is a need for storage systems with targeted cybersecurity functionalities that can be integrated with the overall security ecosystem. ... Fortifying storage systems to withstand, adapt to, and recover from disruptions while maintaining the confidentiality, integrity, and availability of data. Cyber resiliency also includes auditing, monitoring, and the ability to recover promptly from cyber threats or incidents, encompassing strategies such as backup, redundancy, and rapid response mechanisms.


Four Steps To Develop Executive Presence

When it comes to emotional intelligence, being aware of your emotions and reading other people is crucial. Picking up nonverbal cues from others will enhance communication. For instance, when you are speaking and notice the other person’s eyes have "glazed over" or their expression looks blank, it communicates that they are not fully present. So stop speaking and wait a few seconds. Once you notice they are present again, there are several questions you can ask: "Where did I lose you?" or "Was there something I said that caught your attention?" ... Executive presence is not just about exuding self-confidence and authority; it is also about building strong relationships. In my last article on expanding the idea of leadership, I mention being other-focused, which is the opposite of being self-focused. Addressing those around you and showing genuine interest in them and what they are working on makes you more approachable and shows you care and are a good listener. And if they are struggling with something, empathizing before jumping in with a solution emphasizes all the above.


Maritime Cybersecurity: An Emerging Area of Concern for India

The International Maritime Organization (IMO) defines maritime cyber risk as a “measure of the extent to which a technology asset could be threatened by a potential circumstance or event, which may result in shipping-related operational, safety or security failures as a consequence of information or systems being corrupted, lost or compromised.” Maritime cybersecurity includes the systems overseeing ships’ operating software, navigation information, and traffic monitoring. However, the current cyber infrastructure available onboard civilian ships is lacking in defensive cyber capabilities and tools. Maritime sector cyber threats have become serious due to the complex operationalization of IT and OT systems. These systems can be the subject of ransomware, malware, phishing, and man-in-the-middle (MITM) attacks. The motives behind such attacks can vary from traditional applications like naval warfare to espionage, to non-state causes like cyber terrorism and hacktivism. Maritime cyberattacks can thus act as an instrument of foreign policy or be undertaken by criminal groups or individuals. This threat extends to onshore and offshore maritime assets.


The Meeting of the Minds: Human and Artificial

At the intersection of human cognition and LLMs lies the complex domain of language, a common ground where the essence of our thoughts and the architecture of AI converge. Language serves as the bridge between these two realms, with its nuanced syntax, semantics, and pragmatics offering the basis for exploration and understanding. For humans, language is the vessel of consciousness, carrying the weight of our ideas, emotions, and cultural heritage. For LLMs, it is the structured data through which they learn, interpret, and generate responses, mirroring human-like patterns of communication. This shared linguistic foundation enables a unique dialogue between human intelligence and machine algorithms, fostering a collaborative exchange that enriches both the depth and breadth of our collective knowledge and interactions. ... Humans contribute a deep understanding characterized by subtlety, emotional insight, and creative thinking. In contrast, LLMs bring powerful data processing abilities, extensive memory capacity, and advanced pattern recognition. This combination doesn't merely enhance our cognitive abilities; it expands them, allowing for more thorough analysis and wider exploration in problem-solving and innovation.


Harnessing Real-time Data: Transforming Data Management with Artificial Intelligence

In the tech industry, “AI” has become a ubiquitous buzzword, often used in pitches regardless of the underlying technology. As an industry analyst focused on analytics and AI and co-author and contributing author on a number of AI books, including Augmented Intelligence and Causal AI, I have met dozens and dozens of companies that claim to offer AI solutions. I am direct with vendors and want to know how they are applying AI to customer needs. In addition, I press vendors on the depth of the AI/ML capabilities and how they approach the field. ... The need for applying AI to data management is clear and compelling. As organizations are inundated with data from myriad sources, the capacity to curate, process, and extract meaningful insights must scale. The volume of information generated by businesses makes AI a critical technology in helping data science teams make sense of new information. When I work with Chief Data Officers (CDOs), Chief Transformation Officers, and other executives tasked with driving change through data, it is clear that AI is the cornerstone of modern data management strategies. Unfortunately, traditional data ingestion and classification methods begin to fail under the pressures of real-time, high-volume demands. 


API Management: A Weak Link in the Cloud-Native Chain

API management encompasses API design, development, monitoring, testing and security, as well as making updates to APIs after they are in production. These tasks are important, of course, because APIs are everywhere today. They handle 83% of internet requests, according to Akamai, which means that keeping APIs documented, updated and monitored is a critical requirement for virtually any organization that deploys Internet-connected applications. Without an efficient and scalable means of managing APIs, it becomes difficult not just to defend against challenges like security risks involving APIs but also to guarantee a positive developer experience. The more time and toil your developers have to invest in API management, the less time they have to do the things they want to do and that matter most to the business – like developing cool apps and bringing them to market. ... APIs are not new, and most teams that support them have long had API management practices in place. However, in many cases, those practices were conceived in the era when monolithic application architectures and bare-metal servers or virtual machines dominated the IT landscape. 



Quote for the day:

"Sometimes it takes a good fall to really know where you stand." -- Hayley Williams

Daily Tech Digest - February 10, 2024

Managed Everything? Vendors Shift Focus to Services

In many ways, managed detection and response (MDR) covers a lot of ground and, so far, has done well for vendors and their customers. Vendors have happy clients, an exceptionally rapid growth rate, and a very high margin for the service, Pollard says. Meanwhile, businesses can focus on the threats, leading to faster detection and response. Focusing on the data could improve the response time, but that is far from certain. However, no matter what telemetry, data, or devices a detection and response service focuses on to detect threats, businesses just want to focus on outcomes — detecting threats and preventing compromises, says Eric Kokonas, vice president at Sophos. "The truth is that the best applications of MDR are the result — not of strict adherence to a defined set of tools, telemetry sources, and services — but of an adaptable range of human-led capabilities that can be delivered and consumed in ways that are most compatible with organizations' needs and that are most likely to achieve the organizations' desired outcomes," Kokonas says. "Put more plainly, MDR services exist to achieve security and business outcomes the most optimal way possible."


Meetings are about to get weird

And if you want nothing to do with it, I’ve got bad news: Apple Vision Pro users will be showing up soon in meetings as what Apple calls “Personas,” which are CGI-looking video representations that approach, but don’t cross into, the “uncanny valley” (that place of realism where a digital human or robot starts to creep people out). Critics are slamming the appearance of these “Personas,” but like all things, Apple will no doubt make them better with each iteration. Video meeting leader Zoom announced recently that the company’s flagship product will support Apple Vision Pro avatars with a new visionOS Zoom app. You’ll be able to remove meeting participants’ backgrounds and “pin” their real-time hologram anywhere in your physical workspace. ... The practice of using avatars in meetings will offer a huge advantage to employees with disabilities. Companies like Lenovo are spearheading the use of AI avatars to enable employees who otherwise might not be able to attend meetings. Once your visage has been digitized, there are other advantages. Lenovo developed a feature demonstrated at CES that enables you to step away from a meeting and have a digital version of yourself remain in the meeting, blinking and nodding as others talk.


The Linux Foundation and its partners are working on cryptography for the post-quantum world

Part of PQCA's mission is its commitment to the practical application of post-quantum cryptography. The alliance will spearhead technical projects, such as developing software for evaluating, prototyping, and deploying new post-quantum algorithms. In other words, the alliance seeks to bridge the gap between theoretical cryptography and its real-world implementation. One of PQCA's launch projects is the Open Quantum Safe project, which was founded at the University of Waterloo in 2014 and is one of the world's leading open-source software initiatives devoted to post-quantum cryptography. PQCA will also host the new PQ Code Package Project, which will build high-assurance, production-ready software implementations of forthcoming post-quantum cryptography standards, starting with the ML-KEM algorithm. All this effort matters because quantum computing is very much a mixed blessing. As Jon Felten, Cisco Systems' senior director of trustworthy technologies, said: "Quantum computing offers the potential to solve previously unapproachable problems, while simultaneously threatening many digital protections we take for granted."


Rethinking digital transformation beyond traditional verticals

The challenge extends beyond emerging markets, as Philippe points out, citing the global impact of the COVID-19 pandemic. “Many people in so-called richer countries are still deprived of access to life-changing digital services as are many people in so-called poorer countries,” he observes. The barriers include a lack of hardware to get access to such digital services, and training to use those services. However, Philippe is optimistic, noting that the necessary technology already exists and can significantly contribute to at least 8 of the 17 SDGs. “We have the data—we know what works and what doesn’t. Now, we need to scale this knowledge,” Philippe urges. He emphasizes the importance of collaboration, echoing the sentiment of the World Bank Group President, Ajay Banga, to replicate with pride. The goal is to ensure that the majority of each country’s population is not only aware of these services but also has easy access and knows how to utilize them. In expressing gratitude for the accolade received for the collective development work, Philippe acknowledges the contributions of partners, colleagues and thousands of committed individuals. 


Best practices for API error handling

Developers often run into errors during the API integration process. These errors might be caused by an incorrect implementation, a user action, or an internal server error on the API itself. It is important that developers handle these errors properly and present them to end-users in a direct, non-technical manner. The following best practices can lay the foundation for successful error handling during API integration, regardless of the API’s architectural pattern. Validate user input: Users sometimes provide invalid input data, which can lead to errors. Client-side validations help prevent this issue. Validations not only ensure that the user can see and fix the problem quicker, but also help the client and server conserve resources that would otherwise be expended on extra network traffic. Provide user-friendly messages: It’s important to avoid presenting error messages from the server directly to the end-user. Instead, these technical error messages should be simplified and made more user-friendly. They should also clearly tell the user how to fix the error. Handle multiple edge cases: Developers should understand the full range of errors an API can produce so that they can handle every edge case.
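The three practices can be sketched in a single client function. This is a minimal illustration using only the Python standard library; the endpoint, field names, and status-to-message mapping are invented for the example, not taken from any particular API.

```python
import json
from urllib import request, error

# Map HTTP status codes to user-friendly messages that tell the user what
# to do next, instead of surfacing raw server errors (hypothetical wording).
FRIENDLY_MESSAGES = {
    400: "Some of the information you entered looks invalid. Please check it and try again.",
    401: "Your session has expired. Please sign in again.",
    404: "We couldn't find what you were looking for.",
    429: "You're doing that too quickly. Please wait a moment and retry.",
    500: "Something went wrong on our end. Please try again later.",
}

def validate_email(value: str) -> None:
    # Client-side validation: catch obviously bad input before spending a
    # network round-trip (a deliberately simple check, not a full validator).
    if "@" not in value or value.startswith("@") or value.endswith("@"):
        raise ValueError("Please enter a valid email address.")

def create_user(api_url: str, email: str) -> dict:
    validate_email(email)
    payload = json.dumps({"email": email}).encode()
    req = request.Request(api_url, data=payload,
                          headers={"Content-Type": "application/json"})
    try:
        with request.urlopen(req, timeout=10) as resp:
            return json.load(resp)
    except error.HTTPError as exc:
        # Server returned 4xx/5xx: translate to a friendly, actionable message.
        raise RuntimeError(FRIENDLY_MESSAGES.get(
            exc.code, "An unexpected error occurred. Please try again.")) from exc
    except error.URLError as exc:
        # Edge cases: DNS failure, network down, timeout.
        raise RuntimeError("We couldn't reach the server. Please check your connection.") from exc
```

Note the fallback message in the `HTTPError` branch: handling "every edge case" includes status codes you did not anticipate.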


Agile myths busted by Adaptavist

"Agile doesn't scale" - This is a subject of ongoing debate in the Agile community. Many believe that the core tenets of Agile, like flexibility, customer focus, and valuing individuals over process, fail when applied to large departments or organizations. In addition, scaling is often hindered by lousy architecture that breeds excessive dependencies and effort, which is exacerbated by creating dozens of teams. While Agile may seem daunting, starting small is key - after all, if you cannot be agile when the teams are small, scaling that non-agile approach is not likely to end well. Whether applying Agile to an individual team or department, focus on tangible successes and learn from any failures. Let teams discover effective practices rather than mandating rigid standards. ...  "There's one way to be Agile" - No universal standards exist. However, you can use the values and principles as guides for discussing how well your form of agile aligns with the Agile Manifesto. Your context matters, which is the intent behind the ambiguity about relative value between "the items on the left and the right." View existing frameworks as pointers that teams can selectively apply, rather than using them as prescriptive dogma. As your contexts evolve, so too must your agile practices.


Entrepreneurship for Engineers: Open Source Company Ethics

As the founder of a company, you have a series of ethical obligations to different stakeholders, in both your business and your personal relationships, said Matt Butcher, co-founder and CEO of Fermyon, who also has a doctorate in philosophy. You have an ethical obligation to yourself not to burn out; you have an ethical obligation to your family not to neglect them, financially or otherwise. If you’re working 80-hour weeks, so busy you forget to pick up your kids from school, and living in poverty, you aren’t behaving ethically toward yourself or your family. When you’re running a business, you also have an ethical responsibility to your employees — as well as a legal responsibility. ... And then there are questions about data collection. Sometimes things can happen accidentally; you have a privacy-first project but have Google Analytics embedded on your website, for example. “There are some people who will tell you that you are just evil because you say something, but you don’t do what you pretend to be,” said Gaël Duval, CEO of Murena, a de-Googled smartphone company.


10 best practices for implementing an effective data governance framework

Data governance isn’t just the mandate of the IT team, nor is it the sole responsibility of the legal department. Everyone must work together to make data governance an organizational priority. Creating a data governance council can ensure representation from all lines of business plus those stakeholders responsible for compliance, eDiscovery, and other data-related concerns. The council should be responsible for making key decisions, resolving conflicts, and updating the framework. ... Obtaining buy-in from executives is essential to the success of any initiative. Executive sponsorship not only ensures you have the resources necessary to support your program but also signals a commitment to prioritizing data governance throughout the organization. Establishing a direct line of communication can also help you overcome potential challenges during implementation.  ... You can’t optimize your data if you don’t know where it is. Create a comprehensive data map that outlines where your data is stored, how it flows through various systems, and how data sets are related to one another. This visual representation not only enhances transparency but also aids in identifying potential data risks and dependencies.


The Rise of “Quick and Dirty” DR Testing

IT/DR testing is still alive and well; however, these days it has evolved toward what you might call a “quick and dirty” approach. Quick because contemporary exercises place a strong emphasis on brevity in recognition of the new reality of employees’ shortened attention spans. Dirty because modern testing deemphasizes preparation and focuses on making exercises adhere as closely as possible to real-world conditions. Among the other new aspects of contemporary IT/DR tests is a new respect for the benefits of tabletop exercises. Necessity is the mother of invention, and the necessity of letting go of the traditional multi-day exercise has been driving productive innovations in the design and execution of tabletops. (MHA’s Richard Long has been a pioneer in this area, with his one-hour exercises focusing on a particular app or IT service and requiring participants to think on their feet.) These innovations have unlocked new powers in the tabletop in terms of identifying gaps and training staff. Other contemporary innovations include a focus on varying levels of testing complexity, the use of multiple strategies, the rise of tiered testing, and the development of methods to test today’s hybrid apps.


Are You a Lost Leader? Get Back on Track By Following These 4 Tips to Lead With Strength and Conviction

To lead well also requires you to walk the talk. It is important to apply your core values to leading yourself. For example, if one of your values is setting boundaries and making time for things that bring you joy, then be protective of that time. As CEO, I have demands on my time for nearly every hour of the day. In a hybrid and remote world, it's increasingly difficult to create healthy boundaries of time and space, as there are often expectations to be on 24/7. Establishing boundaries to prioritize time for my family is non-negotiable. That time allocation might fluctuate in different seasons, depending on the needs of my family and the needs of the business, but in the spirit of recognizing my core values, it certainly makes it to the top when priorities are determined. ... Owning your own choices is another key part of staying true to your values. It's important to understand what your true north is and hold yourself accountable for your choices — even when the path can be harder. I can't tell you the number of times people have asked, "How do you travel so much?" or "Why did you have kids if you were going to take a job like this?" First of all — wow.



Quote for the day:

“You live longer once you realize that any time spent being unhappy is wasted.” -- Ruth E. Renkel

Daily Tech Digest - February 09, 2024

India’s data protection law: Reimagining a new era of innovation led digital markets

E-commerce platforms would have to make changes in the user interfaces of their websites and apps, with clearer communication with users about consent, processing, erasure, and grievance redressal. Moreover, the e-commerce platforms will have to completely erase all personal data when the user withdraws consent or when the intended purpose is served. The platforms will also now have to implement a verifiable parental consent mechanism to provide services to children below 18 years of age, but cannot track or carry out behavioural monitoring of the child unless exempted separately by the government. This is a complex subject, as many e-commerce platforms already follow due checks for ensuring parental control below a certain age. Moreover, payments in e-commerce for principals below 18 years of age would in any case require the guardianship of a parent or legal guardian, as mandated by the RBI. E-commerce players, however, will still need to adopt the additional obligations. For AI systems, which are now becoming increasingly integral to the operations of e-commerce platforms, this means a shift towards more transparent and ethical data usage practices.


Key strategies for ISO 27001 compliance adoption

ISO 27001 fundamentally breaks down to: “What information security risks do we face? How should we best manage them?” Just as the chicken may come before the egg, note that what should happen in this case is that you identify the risks first and then select the controls that help to manage those risks. You definitely don’t have to apply all of the controls, and nearly all organisations validly treat some as non-applicable in their Statement of Applicability. For example, businesses where all employees work remotely simply don’t have the full range of risks that can benefit from mitigation by the physical controls. When it comes to performance evaluation, it’s largely a case of working through the relevant clauses and controls and agreeing on how good a job the organisation is doing in trying to meet the associated requirements. The ones that are selected for monitoring, measurement and evaluation will depend on the type and size of the organisation and its business objectives. These are basically key performance indicators (KPIs) for information security and might include supplier evaluations and documented events, incidents, and vulnerabilities.


Breach Roundup: US Bans AI Robocalls

Telecom regulators voted unanimously Thursday to make AI-generated robocalls illegal under the 1991 Telephone Consumer Protection Act, which prohibits robocalls from using "artificial" voices. The new rule allows the FCC to order telephone carriers not to facilitate illegal robocalls and empowers individual consumers or organizations to file lawsuits against violators. The decision comes amid concerns that AI could be used to disseminate misinformation about the election. A robocall featuring a deepfake of President Joe Biden urging voters in New Hampshire to stay home on primary day caused controversy in January. The New Hampshire attorney general on Tuesday said he had identified the source of the calls as Texas-based Life Corporation and its owner, Walter Monk. State Attorney General John M. Formella said the calls had been routed through a provider called Lingo Telecom, also based in Texas. New Hampshire issued a cease-and-desist order to Life Corporation, and the FCC sent a cease-and-desist letter to Lingo Telecom. "Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities and misinform voters," FCC Chairwoman Jessica Rosenworcel said in a statement.


Stifling Creativity in the Name of Data

Pitt challenges the notion that data and creativity are mutually exclusive. Builders should base decisions on both metrics and imaginative thinking. Focus obsessively on either, and you lose sight of the problem you aim to solve. "Data can contribute to developer improvement," Pitt concludes, "but developers should not solely rely on it." By the same token, visionaries in the throes of invention must temper flights of fancy with reality checks. Synthesis of human and machine intelligence unlocks maximum potential. But for Pitt, the human mind still reigns supreme when it comes to pushing boundaries and bringing new ideas to life. Software development draws its lifeblood from creative problem solvers who feel intrinsically rewarded by shipping inventive products. Data should inform and empower that mission, not impose limits or demand validation at every turn. The analytics will have their say, but imagination must lead the way. That balance, elusive as it may be, unlocks sustainable innovation as technology’s tides continue rising.


How Generative AI Will Change The Jobs Of Teachers

As generative AI reshapes the world of education, teachers will find their role evolving further away from being providers of knowledge and towards becoming learning facilitators. Perhaps the most significant shift in the role of educators will be an increased focus on nurturing skills such as critical thinking, creativity, and emotional intelligence. These skills will be paramount in a future where our worth is increasingly measured by our ability to perform tasks that machines cannot do or are not as proficient in. Beyond academic teaching, educators play a critical role in safeguarding the welfare of their students, a responsibility that extends far beyond the confines of traditional teaching. This involves not only protecting students from physical harm but also supporting their emotional and mental health, ensuring a safe and inclusive learning environment that fosters resilience and respect. The human touch provided by teachers becomes an indispensable pillar of education, emphasizing the irreplaceable value of empathy and understanding in nurturing well-rounded, emotionally secure individuals. Teachers will, of course, also have a very important role to play in making sure their students are able to use generative AI itself.


5 ways CIOs can help gen AI achieve its lightbulb moment

Being realistic means understanding the pros and cons and sharing this information with customers, employees, and peers in the C-suite. They’ll also appreciate your candor. Make an authoritative, warts-and-all list of the downsides so they can be clearly explained and understood. As AI advisors have pointed out, some downsides include the black box problem, AI’s vulnerability to misguided human arguments, hallucinations, and the list goes on. ... a corporate use policy and associated training can help educate employees on some risks and pitfalls of the technology, and provide rules and recommendations to get the most out of the tech, and, therefore, the most business value without putting the organization at risk. In developing your policy, be sure to include all relevant stakeholders, consider how gen AI is used today within your organization and how it may be used in the future, and share broadly across the organization. You’ll want to make the policy a living document and update it on a suitable cadence as needed. Having this policy in place can help to protect against a number of risks concerning contracts, cybersecurity, data privacy, deceptive trade practice, discrimination, disinformation, ethics, IP, and validation.


Why companies are leaving the cloud

The cloud had no way of delivering on the hype of 2010 to 2015 that gushed about lower costs, better agility, and better innovation. Well, two out of three is not bad, right? The cost of the cloud is where things usually go off the rails. The cloud is still the most convenient platform for building and deploying new systems, such as generative AI, and it also has the latest and greatest of pretty much everything. However, when enterprises run workloads and data sets using traditional infrastructure patterns, such as business applications that process and store data the same way they did when on-premises, there is a negative cost impact to using a public cloud. In other words, those who attempted to use the cloud as a simple host for their workloads and took no steps to optimize those workloads for their new location had much larger bills than expected. Moreover, they didn’t gain any real advantage by leveraging a public cloud for those specific workloads. The cloud is a good fit for modern applications that leverage a group of services, such as serverless, containers, or clustering. However, that doesn’t describe most enterprise applications.


The EU’s Artificial Intelligence Act, explained

In terms of data governance and protection, the EU Artificial Intelligence Act aligns with existing EU data protection laws, including the General Data Protection Regulation (GDPR), to ensure the ethical handling of personal data in AI systems. This includes provisions for data quality, security and privacy, ensuring that AI systems process data in a manner that respects user privacy and data protection rights. The act also provides specific guidelines for biometric identification, stressing the importance of safeguarding personal privacy and security, particularly in the handling of sensitive biometric data. Additionally, it categorizes certain AI systems as high-risk, necessitating stringent compliance and oversight to mitigate potential harms and risks associated with their use. The act establishes specific criteria for identifying and regulating high-risk AI systems. These criteria focus on AI applications that have significant implications for individuals’ rights and safety, like those used in critical infrastructure, employment and essential public services. The regulation mandates strict compliance standards and certification requirements for these systems, ensuring they meet high levels of safety, transparency and accountability. 


US creates advisory group to consider AI regulation

The consortium “will ensure America is at the front of the pack” in setting AI safety standards while encouraging innovation, US Secretary of Commerce Gina Raimondo said in a statement. “Together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.” In addition to the announcement of the new consortium, the Biden administration this week named Elizabeth Kelly, a former economic policy adviser to the president, as director of the newly formed US Artificial Intelligence Safety Institute (USAISI), an organization within NIST that will house AISIC. It’s unclear whether the coalition’s work will lead to regulations or new laws. While President Joe Biden issued an Oct. 30 executive order on AI safety, the timeline for the consortium’s work is up in the air. Furthermore, if Biden loses the presidential election later this year, momentum for AI regulations could stall. However, Biden’s recent executive order suggests some regulation is needed. “Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks,” the executive order says. 


Chinese Hackers Preparing 'Destructive Attacks,' CISA Warns

The report says that Chinese hackers have exfiltrated diagrams and documentation related to operational technology, including SCADA systems, relays and switchgear - data "crucial for understanding and potentially impacting critical infrastructure systems," CISA said. Volt Typhoon actors in some cases had the capability to access camera surveillance systems at critical infrastructure facilities, it also said. The U.S. government and the Five Eyes intelligence-sharing alliance first publicly disclosed the existence of Volt Typhoon in May after cyber defenders had detected activity in Guam and the United States. The Pacific island is just hours away from Taiwan via airplane and is the site of two major American military bases. Microsoft, which also divulged the existence of Volt Typhoon in May, said the group has been active since mid-2021. ... "The information that we are releasing with this advisory is reflecting a strategic shift in PRC malicious cyber activity," Goldstein said. CISA has observed Chinese hacking groups moving away from espionage campaigns toward "prepositioning for future disruptive or destructive attacks," he added.



Quote for the day:

"All you need is ignorance and confidence and the success is sure." -- Mark Twain

Daily Tech Digest - February 08, 2024

The do-it-yourself approach to MDM

If you’re comfortable taking on extra responsibilities and costs, the next big question is whether you can get the right tool — or more often, many tools — you need. This is where you need a detailed understanding of the mobile platforms you have to manage and every platform that needs to integrate with them for everything to work. MDM isn’t an island. It integrates with a sometimes staggering number of enterprise components. Some, like identity management, are obvious; others, like log management or incident response, are less obvious when you think about successful mobility management. Then there are the external platforms that need connections. Think identity management — Entra, Workspace, Okta — and things like Apple Business Manager that you need to work well in both everyday and unusual situations. Then tack on the network, security, auditing, load balancing, inventory, the help desk and various other services. You’re going to need something to connect with everything you already have, or you could find yourself saddled with multiple migrations.


NCSC warns CNI operators over ‘living-off-the-land’ attacks

The NCSC said that even organisations with the most mature cyber security techniques could easily fail to spot a living-off-the-land attack, and assessed it is “likely” that such activity poses a clear threat to CNI in the UK. ... In particular, it warned, both Chinese and Russian hackers have been observed living off the land on compromised CNI networks – one prominent exponent of the technique is the GRU-sponsored advanced persistent threat (APT) actor known as Sandworm, which uses LOLbins extensively to attack targets in Ukraine. “It is vital that operators of UK critical infrastructure heed this warning about cyber attackers using sophisticated techniques to hide on victims’ systems,” said NCSC operations director Paul Chichester. “Threat actors left to carry out their operations undetected present a persistent and potentially very serious threat to the provision of essential services. Organisations should apply the protections set out in the latest guidance to help hunt down and mitigate any malicious activity found on their networks.” "In this new dangerous and volatile world where the frontline is increasingly online, we must protect and future-proof our systems,” added deputy prime minister Oliver Dowden.


What Are the Core Principles of Good API Design?

Your API should also be idiomatic to the programming language it is written against and respect the way that language works. For example, if the API is to be used with Java, use exceptions for errors, rather than returning an error code as you might in C. APIs should follow the principle of least surprise. Part of the way this can be achieved is through symmetry; if you have to add and remove methods, these should be applied everywhere they are appropriate. A good API comprises a small number of concepts; if I’m learning it, I shouldn’t have to learn too many things. This doesn’t necessarily apply to the number of methods, classes or parameters, but rather the conceptual surface area that the API covers. Ideally, an API should only set out to achieve one thing. It is also best to avoid adding anything for the sake of it. “When in doubt, leave it out,” as Bloch puts it. You can usually add something to an API if it turns out to be needed, but you can never remove things once an API is public. As noted earlier, your API will need to evolve over time, so a key part of the design is to be able to make changes further down the line without destroying everything.
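The points above about idiomatic error handling and symmetry can be made concrete with a short sketch. The example is in Python rather than Java, but the same principle applies: raise exceptions for errors instead of returning C-style error codes, and keep paired operations symmetric. The `Registry` class and its methods are purely illustrative.

```python
# Idiomatic error handling: a named exception, not a -1 return code.
class DuplicateItemError(Exception):
    """Raised when an item is registered twice."""

class Registry:
    """A tiny API with one concept (a set of registered names)."""

    def __init__(self) -> None:
        self._items: set[str] = set()

    # Symmetry: add() and remove() mirror each other exactly, so the
    # API follows the principle of least surprise -- whatever you learn
    # about one method transfers directly to the other.
    def add(self, item: str) -> None:
        if item in self._items:
            raise DuplicateItemError(f"{item!r} is already registered")
        self._items.add(item)

    def remove(self, item: str) -> None:
        if item not in self._items:
            raise KeyError(f"{item!r} is not registered")
        self._items.remove(item)
```

A C-style version would instead return status integers from both methods, forcing every caller to check return values and cluttering the happy path; in exception-based languages like Python or Java, that design would violate the "idiomatic to the language" rule.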


Russian Ransomware Gang ALPHV/BlackCat Resurfaces with 300GB of Stolen US Military Documents

The ALPHV/BlackCat ransomware group has threatened to publish and sell 300 GB of stolen military documents unless Technica Corporation gets in touch. “If Technica does not contact us soon, the data will either be sold or made public,” the ransomware gang threatened. However, there is no guarantee that the ransomware gang would not pass the military documents to adversaries even after the military contractor pays the ransom. The BlackCat ransomware gang also posted screenshots of the leaked military documents as proof, displaying the victims’ names, social security numbers, job roles and locations, and clearance levels. Other military documents include corporate information such as billing invoices and contracts for private companies and federal agencies such as the FBI and the US Air Force. So far, the motive of the cyber attack remains unknown, but it’s common for threat actors to feign financial motives to conceal their true geopolitical objectives. While the leaked military documents may not be classified, they still contain crucial personal information that state-linked threat actors could use for targeting.


6 best practices for better vendor management

To build a stronger relationship with vendors, “CIOs should bring them into the fold regarding their priorities and potential concerns about what may —or may not — lie ahead, from a regulatory perspective or the general economic climate, for example,” says Kevin Beasley, CIO at VAI, a midmarket ERP software developer. “A few years ago, supply-chain snags had CIOs looking for new technology,” Beasley says. “Lately, a talent shortage means CIOs are pushing for more automation. CIOs that don’t delay posing questions about how vendor products can solve such challenges, but also take the time to hear the information, will build a valuable rapport that can benefit both parties.” Part of building a collaborative partnership is staying in close contact. It’s important to establish clear communication channels and schedule regular check-ins with active vendors, “to understand performance, expectations, and progress while recognizing that no process or service goes perfectly all the time,” says Patrick Gilgour, managing director of the Technology Strategy and Advisory practice at consulting firm Protiviti.


Three commitments of the data center industry for 2024

To become more authentic and credible in these reputation-building dialogues and go beyond the data center, we must be more representative of the people our infrastructure ultimately serves. Although progress has been made, we must keep evolving. We need diversity of background, experience, ethnicity, age, and outlook in order to fully embrace the challenges of digital infrastructure. The range of roles, skillsets, and opportunities in the sector is far wider than many outside the industry recognize. Creating organizations where every person can be themselves, and deliver in line with their ethics, values, and beliefs is a prerequisite for building a positive reputation. And of course, the more attractive an industry we become, the more great candidates, partners, and supporters we’ll attract. ... Speaking of inspiring the next generation, 2024 can be the year in which we embrace youth. How do we attract more young people into the industry? By inspiring them. The data center sector is a dynamic, exciting, and rapidly growing sector. We want to ensure this is being effectively articulated in print, across social media, and online.


Is your cloud security strategy ready for LLMs?

When employees and contractors use those public models, especially for analysis, they will be feeding those models internal data. The public models then learn from that data and may leak those sensitive corporate secrets to a rival who asks a similar question. “Mitigating the risk of unauthorized use of LLMs, especially inadvertent or intentional input of proprietary, confidential, or material non-public data into LLMs” is tricky, says George Chedzhemov, BigID’s cybersecurity strategist. Cloud security platforms can help, he adds, especially for access controls and user authentication, encryption of sensitive data, data loss prevention, and network security. Other tools are available for data discovery and surfacing sensitive information in structured, unstructured, and semi-structured repositories. “It is impossible to protect data that the organization has lost track of, data that has been over-permissioned, or data that the organization is not even aware exists, so data discovery should be the first step in any data risk remediation strategy, including one that attempts to address AI/LLM risks,” says Chedzhemov.


Shadow AI poses new generation of threats to enterprise IT

Functional risks stem from an AI tool's ability to function properly. For example, model drift is a functional risk. It occurs when the AI model falls out of alignment with the problem space it was trained to address, rendering it useless and potentially misleading. Model drift might happen because of changes in the technical environment or outdated training data. ... Operational risks endanger the company's ability to do business. Operational risks come in many forms. For example, a shadow AI tool could give bad advice to the business because it is suffering from model drift, was inadequately trained or is hallucinating -- i.e., generating false information. Following bad advice from GenAI can result in wasted investments -- for example, if the business expands unwisely -- and higher opportunity costs -- for example, if it fails to invest where it should. ... Legal risks follow functional and operational risks if shadow AI exposes the company to lawsuits or fines. Say the model advises leadership on business strategy. But the information is incorrect, and the company wastes a huge amount of money doing the wrong thing. Shareholders might sue.


Creating a Data Quality Framework

A start-up business may not initially have a need for organizing massive amounts of data (it doesn’t yet have massive amounts of data to organize), but a master data management (MDM) program at the start can be remarkably useful. Master data is the critical information needed for doing business accurately and efficiently. For example, the business’s master data contains, among other things, the correct addresses of the start-up’s new customers. Master data must be accurate to be useful – the use of inaccurate master data would be self-destructive. If the organization is doing business internationally, it may need to invest in a Data Governance (DG) program to deal with international laws and regulations. Additionally, a Data Governance program will manage the availability, integrity, and security of the business’s data. An effective DG program ensures that data is consistent and trustworthy and doesn’t get misused. A well-designed DG program includes not only useful software, but policies and procedures for humans handling the organization’s data. A Data Quality framework is normally developed and used when an organization has begun using data in complicated ways for research purposes. 


Meta Is Being Urged to Crack Down on UK Payment Scams

Since social media marketplace platforms such as Facebook Marketplace do not have dedicated payment portals that accept payment cards, Davis said, standard security practices adopted by card issuers cannot be used to protect customers. As a result, preventing fraud on social media platforms is a challenge, he said. "To tackle this, we need greater action from Meta to stop fraudulent ads from being put in front of the U.K. consumers," Davis said. Meta public policy head Philip Milton, who testified before the committee, said his company takes fraud prevention "extremely seriously." Milton said Meta has adopted such measures as verifying ads on its platforms and permitting only financial ads that have cleared the U.K. Financial Services Verification process rolled out by the British Financial Conduct Authority. "A good indicator of fraud is fake accounts, as scammers generally tend to use fake accounts to carry out scams. As fraud prevention, Meta removed 827 million fake accounts in the third quarter of 2023," Milton said. Microsoft Government Affairs Director Simon Staffell said the computing giant pursues criminal infrastructure disruption as one of its fraud prevention strategies.



Quote for the day:

"If you are willing to do more than you are paid to do, eventually you will be paid to do more than you do." -- Anonymous