Daily Tech Digest - November 08, 2022

Public sector IT projects need ethical data practices from start

“When you openly communicate your purpose – when you’re open about how you’re doing things, how data has been used – you instil the most valuable thing when doing data projects, which is trust,” he says. “We are probably in a place where people don’t trust organisations or the government with their data, and that’s a bad place to be… so the transparency, openness, communicating purpose, engaging with people is fairly key.” Ahmed adds that, even in situations where it is not possible to consult the public – for example, when systems are being built for law enforcement or intelligence purposes, or just internally for use by civil servants – openness among the internal stakeholders is still very important. All of this should also be documented: “Make [an ethical] framework that’s suitable to what you’re doing as an organisation, keep referring to it, refresh it, measure your outcomes, learn from it… the potential loss of revenue or reputation for an organisation or government departments are huge if it goes wrong. With ethics, the win is not getting things wrong, the win is having less negative impact or more positive impact.”


Securing APIs and Microservices in the Cloud

No matter what your role is within the software development and API process, you need to be thinking about the implications of your work. Are you being as secure as possible? Because if not, we end up with delays, failures, or worse. What I mean by this is, obviously, you have delays or failures. That can just be annoying when you end up with everything behind schedule and over budget, everyone a little bit stressed. But it's more than just that. If we look at security over the last five years, it used to be the case that you would get your personal details stolen. It might be credit cards. It might be addresses, or in America, social security numbers. What you're starting to see now, especially in the last year or two, is a lot more ransomware. Before, it would be, we're going to take your data, and that's obviously bad. People can replace credit cards. It's a bit harder to replace your social security number, and there have been large rises in fraud. It's bad. Ransomware is worse, because then you are totally locked out of your system. Then you are, as the name suggests, held at ransom.


The Fiction of Sentient AI

Artificial intelligence has a unique possibility that tickles this part of the imagination: it seems familiar. We can recognize a part of ourselves in it: the systematic thinking part. It is rationality without irrationality. Pure reason in action, where the results are always clean and binary. There is a non-human purity of thought in the elegance of its solutions. We have always yearned futilely for this purity — in our food, our philosophical quests, our relationships, even in our gods. But this purity, the simple and bold ease with which we solve our most bothersome tasks at a rapid pace, creates both complacency and trepidation. The complacency is apparent. We have always, as a species, had an infinite capacity for self-congratulation. The trepidation part is more intriguing. It exposes how poorly we have actually sized ourselves up, and reveals at once both the fallacy of thought, its ambitions and its limitations. The fever-dream of sentient AI as the climax of our race is a trite template for thousands of sci-fi stories across several media.


Ways to spot if your organisation has a false sense of security – and what to do about it

We tend to think of the security team as solely responsible for all aspects of systems security implementation and management, with the IT team more focused on enabling the workstreams of the organisation through data protection, and ensuring backup and recovery systems are properly implemented. Given the complementary nature of security threats, and the need to easily back up and recover if / when an attack occurs, it seems logical that these two groups would collaborate closely. But that’s often not the case. In fact, we found this split between the two roles to be worryingly prevalent in our research. 19% of UK SecOps decision-makers responding to our survey believe collaboration with IT is not strong, and 5% went as far as to call it “weak.” Flipping the coin, among IT decision-makers, 16% believe collaboration is not strong. Across the two roles, in total, 20% of IT and SecOps respondents believe the collaboration between the two is not strong.


How ‘synthetic media’ will transform business forever

The world is on the brink of a revolution in various “realities” — virtual, augmented, and mixed (VR, AR and MR). Mark Zuckerberg’s Meta is planning a grand pivot from old-school social networking to next-generation virtual or mixed reality, which Zuckerberg famously branded “the Metaverse.” By definition, AR, VR, and MR involve digital content, either existing in a digital world (VR) or superimposed on the real world (AR). Nearly all of this content will come in the form of synthetic media. In fact, Meta has already introduced a new AI-powered synthetic media engine, called Make-A-Video. As with the new generation of AI-art engines, Make-A-Video uses text prompts to create videos. Meta is currently promoting this engine as a very fast way for creators to produce video content or virtual environments. Normally, a company making, say, marketing content would need to hire a production crew, pay for post-production work, hire actors, find a location — all that. But products like Make-A-Video suggest that, in the near future, a single creative could make video productions alone in a few hours.


Low-code and no-code automation accelerates user experience for financial institutions

Low-code and no-code automations help business users, non-engineers and non-developers, solve end-to-end customer experience problems easily and quickly by creating the use cases themselves. The modern technology stack of low-code automation tools enables citizen developers to solve their problems and replicate them at scale across other businesses without having to rely on overburdened IT staff. One example: many traditional systems focus on identity verification or KYC, whereas much of the fraud in loan financing may happen through a fraudulent consumer pay stub. As pay stubs correlate with whether a consumer will pay back their loan successfully and on time, it is important to apply AI/ML automation to verify the validity of this important signal and avoid defaults. Distinguishing a fraudulent transaction from a clean one quickly and correctly also benefits merchants as well as end customers, improving user experience and revenue generation.
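As a minimal illustration of the kind of automated pay stub validation described above, the sketch below applies simple consistency rules to a parsed stub. The field names and the checks are hypothetical; a production system would combine rules like these with a trained ML model and document-forensics signals.

```python
def paystub_checks(stub: dict) -> list[str]:
    """Return a list of consistency flags for a parsed pay stub."""
    flags = []
    # Net pay should equal gross pay minus deductions (within rounding).
    expected_net = stub["gross_pay"] - stub["total_deductions"]
    if abs(stub["net_pay"] - expected_net) > 0.01:
        flags.append("net_pay_mismatch")
    # Year-to-date gross should be at least the current period's gross.
    if stub["ytd_gross"] < stub["gross_pay"]:
        flags.append("ytd_less_than_period")
    return flags

clean = {"gross_pay": 3000.0, "total_deductions": 700.0,
         "net_pay": 2300.0, "ytd_gross": 27000.0}
suspect = {"gross_pay": 9000.0, "total_deductions": 700.0,
           "net_pay": 8900.0, "ytd_gross": 4000.0}

print(paystub_checks(clean))    # []
print(paystub_checks(suspect))  # ['net_pay_mismatch', 'ytd_less_than_period']
```

A citizen developer would typically wire checks like these together in a low-code workflow rather than write them by hand; the code just makes the underlying logic concrete.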

One of the challenges of developer platforms is that they shouldn’t be viewed as things that can just be built, launched, and then forgotten about; they require constant evolution and maintenance. The feedback loops initiated by a considered communication strategy will help here, but it’s also important to consider the ways the platform evolves alongside the organization and emerging technologies. A technique included for the first time in Volume 27 of our Technology Radar—incremental developer platforms—can be particularly useful as a way of responding to these multifaceted demands. Such an approach not only ensures alignment with the specific needs of users, but also prevents the platform from being derailed by over-ambition—something that typically stems from a preconceived vision of what the platform should look like. The virtues of an incremental approach to software are widely accepted by the industry, so why not bring that thinking to platforms and internal tooling?


Question your successes as much as your failures

What did you do to pull it off? Can it be replicated? What did you learn? It’s an exercise that many leaders I’ve interviewed talked up even before the pandemic. “I’ve learned to question success a lot more than failure,” said Kat Cole, who is the president and chief operating officer of Athletic Greens, a nutrition company. As she told me in an interview years ago, “I’ll ask more questions when sales are up than I do when they’re down. I ask more questions when things seem to be moving smoothly, because I’m thinking: ‘There’s got to be something I don’t know. There’s always something.’ This approach means that people don’t feel beat up for failing, but they should feel very concerned if they don’t understand why they’re successful.” Successes can feel like moments for celebration, rather than furrowed-brow scrutiny. But it is precisely those moments of interrogation that can lead to insights and longer-term competitive advantages. Today’s shorter economic cycles create more momentum, both good and bad, and you want to be riding a wave rather than trying to paddle against it.


What is confidential computing?

There are as many ways confidential computing can work as there are companies coding them, but recall the definition noted above. Google Cloud uses confidential virtual machines with the Secure Encrypted Virtualization (SEV) extension supported by 3rd Gen AMD EPYC CPUs. Data remains encrypted in memory with node-specific, dedicated keys that are generated and managed by the processor; the keys are generated within the hardware during node creation and never leave that hardware. Today, IBM claims to be on the fourth generation of its confidential computing products, starting with IBM Cloud’s Hyper Protect Services and Data Shield in 2018. In pride of place among the Hyper Protect services is a FIPS 140-2 Level 4 certified cloud hardware security module. Both products are rated for regulations such as HIPAA, GDPR, ISO 27K and more. IBM also offers HPC Cluster, a portion of IBM Cloud where customers’ clusters are made confidential using “bring your own encrypted operating system” and “keep your own key” capabilities.
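The key lifecycle described above can be illustrated with a toy software analogy: each "node" gets a dedicated key at creation time, and memory only ever holds ciphertext. This is purely conceptual; in real SEV the keys live inside the processor, encryption is transparent to software, and a simple XOR keystream like the one below would not be used.

```python
import hashlib
import secrets

class Node:
    def __init__(self):
        # Per-node key generated at node creation (held in hardware in SEV).
        self._key = secrets.token_bytes(32)

    def _keystream(self, n: int) -> bytes:
        # Derive a keystream from the node key (toy construction only).
        out = b""
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(self._key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:n]

    def store(self, plaintext: bytes) -> bytes:
        # "Memory" only ever sees ciphertext.
        ks = self._keystream(len(plaintext))
        return bytes(a ^ b for a, b in zip(plaintext, ks))

    def load(self, ciphertext: bytes) -> bytes:
        return self.store(ciphertext)  # XOR stream cipher is symmetric

node_a, node_b = Node(), Node()
enc = node_a.store(b"secret record")
print(node_a.load(enc))                       # decrypts on the owning node
print(node_b.load(enc) != b"secret record")   # another node's key fails
```

The point of the analogy is the last line: because keys are node-specific and never shared, ciphertext lifted from one node's memory is useless anywhere else.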


The security dilemma of data sprawl

Security teams can look at specific verticals to understand what’s working (and what’s not) when it comes to limiting data sprawl in the workplace. The finance sector, for instance, is a prime example of an industry that has more stringent security controls and regulations, therefore limiting apps in the workplace. The Netskope Threat Labs team recently found that fewer than 1 in 10 employees in finance use personal applications at work. Instead, they use managed apps that are closely monitored by security teams. Other sectors are having a more difficult time limiting data sprawl, given the remote nature of the business and less stringent industry regulations. Retail employees, for example, are using a slew of cloud apps in the workplace regularly. In fact, 40% of users in the retail industry are uploading data to personal apps. It is crucial for IT security teams, not just in this sector but across all industries, to take proactive measures to help minimize the risk of data sprawl.



Quote for the day:

"The quality of leadership, more than any other single factor, determines the success or failure of an organization." -- Fred Fiedler and Martin Chemers

Daily Tech Digest - November 07, 2022

Introduction to SPIFFE/SPIRE

The Secure Production Identity Framework For Everyone (SPIFFE) is a specification for workload identity. According to Gilman, the easiest way to think about SPIFFE is as a passport. Similar to how people are issued passports in a common shape with a barcode and standard information, SPIFFE dictates the standard methods to prove and validate the identity of a service. It’s like bringing the “Sign in with Google” experience to the software services themselves, he adds. There are three key components in SPIFFE. First, SPIFFE specifies that services shall identify themselves with what’s called a SPIFFE ID, which is defined as a URI in the format of spiffe://trust-domain-name/path. These IDs are then encoded into a SPIFFE Verifiable Identity Document or SVID. SVIDs aren’t so much a document type themselves — instead, they support either X.509 or JWT document types. Last but not least, SPIFFE specifies a workload API that issues and rotates these SVIDs, along with the keys needed to validate them. SPIRE is the code that implements the SPIFFE specification—you can think of it as a production-ready SPIFFE runtime environment. 
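The SPIFFE ID format described above can be made concrete with a small parser. This is a simplified illustration of the spiffe://trust-domain-name/path shape, not a full implementation of the SPIFFE specification's validation rules.

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust_domain, path), or raise ValueError."""
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe":
        raise ValueError("SPIFFE IDs must use the spiffe:// scheme")
    if not parsed.netloc:
        raise ValueError("SPIFFE IDs must name a trust domain")
    return parsed.netloc, parsed.path

domain, path = parse_spiffe_id("spiffe://example.org/billing/payments")
print(domain)  # example.org
print(path)    # /billing/payments
```

In practice a workload would not parse IDs by hand; it would receive its SVID (and the keys to validate peers' SVIDs) from the SPIFFE workload API, as the excerpt notes.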


C-Suite: Businesses are placing a high priority on cybersecurity

The C-suite now frequently discusses cybersecurity in boardroom discussions. IT and business leaders have historically had difficulty cooperating on cyber risk management, but this disagreement seems worse than ever right now. According to a study, over 90% of IT decision-makers think their organisation would be willing to forego cybersecurity in favour of other objectives. Such a strategy of trading security for short-term gains is not worth the cybersecurity risk, which includes monetary losses and reputational harm. An organisation must resolve this business-IT conflict and come to a consensus on cyber risk as a crucial component of business risk in order to succeed in the post-pandemic era of hybrid or remote workforces. This will allow organisations to better identify, communicate, and mitigate cyber risk across the workplace, maximising their commercial opportunities and preventing pricey breaches. Additionally, research shows that 38% of business decision-makers and 50% of IT leaders believe the C-Suite fully comprehends cyber dangers.


Disadvantages of industrial IoT

Using IIoT creates massive amounts of data. That wouldn’t matter, were it not for the fact that this information needs to be processed quickly in order to be of any use. Especially when applied to digital operations, data processing is key to success. Additionally, all this generated information brings matters of privacy and security into question. IoT itself is a relatively new concept, and protecting the data that it collects will require companies to find different and more efficient ways to sort through digital assets. At the very least, businesses operating with IIoT technology should be sure to invest in secure cloud computing infrastructure. Without strong digital assets, IIoT implementation will become even more complicated and risky than it already is. ... Transitioning to IIoT is costly. Regardless of the need for new systems, as mentioned above, current IoT expenses are already high. This is because IIoT uses sophisticated software to analyze productivity and predict future trends and issues. It is also capable of deploying smart-sensing software for use in technology and agricultural businesses. Combined with the network that IIoT provides to companies, the expense of developing a digital strategy can be hefty.


10 future trends for working with business leaders

CEOs of the world’s largest companies tell IDC that they already make around 30% of their revenue from digital products, and they expect that proportion to grow in the years to come. IDC identifies three dimensions along which enterprises can achieve this growth. First, they can exploit new channels: e-commerce, mobile apps, or the creation of new distribution paths such as enabling the circular economy. Second, they can adopt additional revenue models: pay-per-use, subscriptions, dynamic pricing, transaction fees, or payment for outcomes. And third, they can seek to monetize new digital assets: data, intellectual property, or virtual objects. Developing such new revenue streams requires that CIOs keep pressing ahead with digital spending. “If you pause, you’re already behind,” says Powers. Building new products may involve skills that CIOs don’t yet have on their roster. “You have to have the right mix of in-house and partners that can enable quicker development,” he adds.


How to prepare for a SOC 2 audit – it’s a big deal, so you’d better get ready

“Companies tend to write their controls down and never look at them again, so preparing for the audit is an appropriate time to look at and update them if they don’t reflect what you’re doing,” says Paul Perry, a member of the Emerging Trends Working Group with the governance group ISACA and the Security, Risk and Controls Practice Leader with accounting and advisory firm Warren Averett. Auditors want to see well-documented policies, but they also want to see them in action to verify that organizations are doing in day-to-day practice what those policies say they should be doing. For example, software engineers may be testing code, but they need to do so in a manner that follows the process and documentation requirements outlined in the organization’s policies. That’s the kind of action auditors will want to see, Yawn says. Review security and privacy controls to ensure they’re aligned with the organization’s own security and privacy policies as well as regulatory requirements and industry best practices.


Does data need to come from a single source of truth?

Using potentially flawed data in the decision-making process not only leads to incorrect decision-making, but can have a negative impact on future data operations. If there isn’t real clarity about where the data comes from, what its quality is and what it really means, how can employees really trust that data? And if they can’t trust it, the consequences can be serious, with executives developing a negative view of data-driven decision-making and underinvesting in future data projects. It’s a vicious data circle that can end in a business not fully realising the true value of arguably its most important asset. It is crucial, therefore, that data is trusted and accurate, but ensuring data is reliable across multiple different sources is another challenge entirely. The key is giving employees a single pane of glass through which to see all of the available data. This not only provides a single point of reference that allows employees to search for data on a reliable platform, but also gives them access to data from a wide range of different sources, such as CRM or ERP systems.


How Chipmakers Are Implementing Confidential Computing

"Everybody wants to continue to reduce the attack surface of data," says Justin Boitano, vice president and general manager of Nvidia's enterprise and edge computing operations. "Up to this point, it is obviously encrypted in transit and at rest. Confidential computing solves the encrypted in-use at the infrastructure level." Nvidia is taking a divergent approach to confidential computing with Morpheus, which uses artificial intelligence (AI) to keep computer systems secure. For example, Morpheus identifies suspicious user behavior by using AI techniques to inspect network packets for sensitive data. "Security analysts can go and fix the security policies before it becomes a problem," Boitano says. "From there, we also realize the big challenges — you have to kind of assume that people are already in your network, so you have also got to look at the behavior of users and machines on the network." Nvidia is also using Morpheus to establish security priorities for analysts tracking system threats.


Memory-Based Cyberattacks Become More Complex, Difficult To Detect

There are two broad classifications of memory attacks. The first involves attacks on storage devices that are used to boot or load an operating system or software for a machine. Greenberg said that often, but not always, these require physical access to the machine to mount an effective attack on the storage, although an already compromised machine may further corrupt the storage such that the machine remains permanently compromised until it is completely erased and restarted. Encryption can help protect these storage devices. The second involves RAM devices that store temporary data. These devices are more likely to be attacked through the machine itself, including through internet-connected attacks. Physical attacks on RAM are also a possibility. Most systems’ security comes from physical security combined with built-in memory protection and run-time security provided through the system. “But as new ways of exploiting cybersecurity weaknesses are discovered over time, more advanced memory types tend to contain mitigating features for those methods,” Greenberg said.


The new CIO security priority: Your software supply chain

Whether it’s components, APIs, or serverless functions, most organizations underestimate what they’re using by an order of magnitude unless they run routine inventories, Worthington points out. “They find out that some of these APIs aren’t using proper authentication methods or are maybe not written in a way that they expected them to be and maybe some of them are even deprecated,” she says. Beyond vulnerabilities, evaluating the community support behind a package is as important as understanding what the code does, because not all maintainers want the burden of having their code treated as a critical resource. “Not all open source is made the same,” she warns. “Open source may be free to download but certainly the use of it is not free. Your use of it means that you are responsible for understanding the security posture behind it, because it’s in your supply chain. You need to contribute back to it. Your developers need to participate in fixing vulnerabilities,” says Worthington, who suggests organizations should also be prepared to contribute monetarily, either directly to open-source projects or to initiatives that support them with resources and funds.
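The routine inventories mentioned above can start very simply. The sketch below scans a requirements-style dependency list and flags entries declared without an exact version pin, which are harder to audit and reproduce. The file contents are hypothetical, and a real inventory would also cover transitive dependencies (for example via an SBOM tool).

```python
import re

def audit_requirements(text: str) -> list[str]:
    """Return names of dependencies declared without an exact pin (==)."""
    unpinned = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # The package name is everything before the first version operator.
        name = re.split(r"[<>=!~]", line, maxsplit=1)[0].strip()
        if "==" not in line:
            unpinned.append(name)
    return unpinned

requirements = """
requests==2.31.0
flask>=2.0        # any 2.x or later may be installed
pyyaml
"""
print(audit_requirements(requirements))  # ['flask', 'pyyaml']
```

Running a check like this in CI is one cheap way to keep the dependency list from drifting away from what the security team believes is deployed.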


The Future of DevOps Is No-Code

For DevOps, the starting point for upskilling is to train non-DevOps personnel to become effective members of the DevOps team. And this is where no-code and low-code DevOps tools come in. With no-code and low-code tools, even complete development novices can learn to build websites and applications. If someone has enough computer knowledge to drag and drop, they can probably learn no-code tools. And those with a little more computer savvy can put low-code tools to good use. As their name suggests, no-code and low-code tools facilitate software and application development with minimal need for writing or understanding code. Instead of building code, developers rely on visual, drag-and-drop processes to piece together pre-made functionality. So instead of needing to understand the intricacies of specific programming languages, developers need only have a good feel for the business’s needs, the overall application architecture, and the application’s workflows.



Quote for the day:

"You don't have to hold a position in order to be a leader." -- Anthony J. D'Angelo

Daily Tech Digest - November 06, 2022

Best Practices for Enterprise Application Architecture

The iterative approach is a more practical method of building enterprise application architectures, where you start small and build out your architecture in small, incremental steps. This approach is particularly useful for enterprises that have limited resources and can’t afford to build a full-scale architecture from scratch. Instead of starting with a full-scale architecture, design and implement a series of smaller “proof-of-concept” applications that prove the feasibility of your ideas. Once these applications are ready, you can scale them into an enterprise-level solution. ... The Agile adoption process is a critical step for any enterprise application, and the implementation of agile methodology can be daunting for organizations that have not done it before. However, there are many benefits to adopting a more agile development approach, which include delivering software faster and at less cost. ... EA governance is the process involved in managing and maintaining an EA. This includes identifying and defining an EA’s goals, objectives, and key performance indicators (KPIs). It also involves establishing a governance framework that supports the EA’s development, management, and maintenance.


Technology leader shall be open to accepting changes

As systems evolve over time, so does their complexity. Maintaining such systems or components will always be a challenge. Since people move on, there should be an inventory of all systems and components, along with actively maintained documentation, a practice many organisations don’t adhere to. Maintainability should be a key consideration while designing and building systems. ... Technology leaders must demonstrate consistent delivery of high-quality services, which necessitates the implementation of appropriate systems and processes. Simultaneously, such systems and processes must not be a barrier to adapting to a changing business and technology ecosystem. ... Like GDPR, many countries are coming up with regulations for data privacy and cyber security requirements. Coping with such demands is necessary, but it is difficult because it is complex, dynamic, and ever-changing. To add to that, establishing a return on investment for security solutions is very challenging.


7 hard truths of business-IT alignment

Some CIOs treat IT-business alignment as their own responsibility. That’s a mistake, experts say. “The leadership team below the CIO also needs to be customer facing. It needs to be able to help solve problems. It shouldn’t just be going away and writing code,” Pettinato says. “To make it scalable, you need to take it beyond just one individual.” “I clearly can’t, myself, be involved in every organizational conversation,” Barchi says. “That is where trusting your team helps. There are many leaders on my team who are in these meetings day in and day out, solving problems in real time.” In fact, he says, creating a team that’s capable of doing this is the most important part of his job. “As I’ve grown as a leader, I’ve recognized that my contribution is not my own technical skill and my ability to make decisions. It’s my ability to create a team that can do all of that,” Barchi says. “I think CIOs do well when we know that our job is not to be involved in every technology decision — it’s to create the environment where that can happen and create a team that can do that.”


Teaching is complicated. But technology in the classroom doesn't have to be

When the pandemic forced schoolchildren to learn from home and adapt to digital learning, educators lost their students' attention, and learning suffered. But once schools opened up, digital learning didn't disappear; for many, it became the norm. Seage says technology should never be the driver of the classroom. "The technology has to complement what you do. It complements all different teaching styles," he says. As a former student, Seage recalls the difficulty teachers faced in finding novel ways to engage students and wanted to offer a solution. Interactive whiteboards offer a low learning curve for teachers and students while also promoting interaction and collaboration, he says. In the school's gymnasium, for instance, boards serve as an enhanced coaching tool, allowing coaches and players to re-watch game footage during practice, or strategize game plays for future matches. Micah Shippee, director of education technology consulting and solutions at Samsung, is a former educator who now works with schools to adopt Samsung technology.


How to Choose the Best Software Architecture for Your Enterprise App?

Patterns in architecture are reusable ways to fix common design problems. They make it easier to reuse code and keep apps running smoothly for longer. In addition to being scalable, flexible, and easy to maintain, the software must be able to handle a wide range of requests without problems. But making software hard to use works against these goals, because it makes it less likely that people will use the software and use it well. Because of this, the software needs to be flexible enough to be adapted to the needs of each user. ... Event-driven architecture (EDA) is a way of making software that relies on events to send messages between modules. It breaks applications into small modules that can run on their own and share data with a small number of other modules using standard protocols. An event producer and an event consumer are the two most important parts of an EDA system: the producer publishes events, and the consumer listens for the events it cares about and reacts to them.
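The producer/consumer relationship in an EDA can be sketched with a minimal in-process event bus: consumers register handlers for named events, and a producer publishes through the bus without knowing who is listening. The event names and payloads below are illustrative.

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[Any], None]):
        # A consumer module registers interest in a named event.
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: Any):
        # A producer emits an event; the bus fans it out to consumers.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
received = []

# Consumer: reacts to the event without knowing who produced it.
bus.subscribe("order_placed", lambda order: received.append(order["id"]))

# Producer: emits the event without knowing who is subscribed.
bus.publish("order_placed", {"id": 42, "total": 99.5})
print(received)  # [42]
```

The same decoupling is what a message broker provides between separately deployed modules; here it is collapsed into one process to keep the pattern visible.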


Is there a cyber conflict happening behind the scenes?

There’s a global digital dependency happening right now, accelerated even further by the pandemic driving a need for remote services in nearly every industry. While this adaptation is an overall benefit to progressive societies, it opens new and innovative ways for cyber attackers to target organizations and consumers alike. Even those who aren’t connected are inadvertently impacted by the digital world and cyberattacks, which has people around the world asking: is there a cyber battle going on? ... At the beginning of the Russian-Ukrainian conflict earlier this year, Russians attacked a satellite provider in Ukraine, affecting countries including Germany and France and bricking edge devices across the continent. This affected both civilian and military communication, hindering war efforts on the Ukraine side and evacuation efforts for fleeing citizens. These attacks aren’t just being carried out by high-level nation-state actors, they’re also being carried out by hacktivists and volunteers. Even simple distributed denial-of-service (DDoS) attacks can generate damage with the right amount of devices. 


IT Ops 4.0: Operations Architecture For The Industry Automation Age

A structured communication effort is essential to gaining support and motivating employees. Every organization has its way of getting started and doing this, but most steps involve at least three elements. Engage people in alignment with the vision. It’s essential to be transparent about where we are (or where we started), where we’re heading, and how we’re getting there as a team. This is to demonstrate the value of transformation, both for the organization and its employees. It is essential to experience problems, challenges, or innovative approaches. Employees gain a better understanding by learning from the companies leading the way. Tours enable employees to think outside the box, hear stories of change, and discuss the challenges that come with it. When people know that change has been around for a long time and understand why it is happening, they tend to want to learn as much as possible about it. Employers need to gain momentum and make it as easy as possible to provide access to information and resources on new technologies and approaches.


Why Wasm is the future of cloud computing

Wasm is already very capable. And with the new technologies and standards that are on the way, Wasm will let you do even more. ... WASI will provide a standard set of APIs and services that can be used when Wasm modules are running on the server. Many standard proposals are still in progress, such as garbage collection, network I/O, and threading, so you can’t always map the things that you’re doing in other programming languages to Wasm. But eventually, WASI aims to provide a full standard that will help to achieve that. In many ways, the goals of WASI are similar to POSIX. Wasm as it now stands also doesn’t address the ability to link or communicate with other Wasm modules. But the Wasm community, with support from members of the computing industry, is working on the creation of something called the component model. This aims to create a dynamic linking infrastructure around Wasm modules, defining how components start up and communicate with each other (similar to a traditional OS’s process model).


Making the case for security operation automation

Security teams must be able to scale operations to deal with the increasing volume of everything coming at them. Faced with a global cybersecurity skills shortage, CISOs need alternatives to hiring their way out of this quagmire. ... When it comes to security operations process automation, one might equate this activity with security orchestration, automation, and response (SOAR) technology. In some cases, this is a correct assumption, as 37% of organizations use some type of commercial SOAR tools. Interestingly, more than half (53%) of organizations eschew SOAR, using security operations process automation functionality within other security technologies instead – security information and event management (SIEM), threat intelligence platforms (TIPs), IT operations tools, or extended detection and response (XDR), for example. Those organizations using SOAR admit that it is no day at the beach – 80% agree that using SOAR was more complex and time-consuming than they anticipated. Technology aside, security professionals acknowledge that there are a few major impediments to security operations process automation.


Gender has no bearing on your abilities in tech industry

According to statistical information, there is clearly not an equal representation of women in technology. ... The first is stereotyping and conscious and unconscious bias, which occur when people believe that being a woman may have a negative impact on performance, level of intelligence or aptitude. I believe it began when women were not encouraged to pursue STEM courses – Science, Technology, Engineering and Math. Nowadays, there are concerted efforts and interventions to encourage women to pursue STEM degrees. Secondly, there aren’t enough role models, advocates, and people challenging the status quo, although overall things have improved significantly in recent years. I do not recall knowing any Nigerian woman in Data Analytics or Business Intelligence when I started my career. I never met them or heard about them. I seriously doubt they existed at the time, which says a lot. The lack of role models at the time was a major factor, but I am glad things are improving now.



Quote for the day:

"Setting an example is not the main means of influencing others, it is the only means." -- Albert Schweitzer

Daily Tech Digest - November 05, 2022

The tight connection between data governance and observability

Although data governance helps in establishing the right set of data management policies and procedures, current data stacks are growing beyond boundaries. With data sets now scaling with more data sources, more tables, and more complexity, there is a pressing need to maintain a constant pulse on the health of these systems. Since any amount of downtime can lead to partial, erroneous, missing, or otherwise inaccurate data, organizations need to do better than just implementing a handful of policies. Data observability enables organizations to cater to these increasingly complex data systems and support an endless ecosystem of data sources and formats. By providing a real-time view of the health and state of data across the enterprise, it empowers them to identify and resolve issues and go far beyond just describing the problem. Observability provides much-needed context to the issue, paving the way for a quick resolution while also ensuring it doesn’t transpire again.
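The kind of continuous health check an observability layer automates can be sketched in a few lines – here a freshness and null-rate check on a table. The field names and thresholds are illustrative, not taken from any particular observability product:

```python
# Minimal data-observability sketch: flag a table that is stale or
# has too many null cells. Thresholds here are illustrative defaults.
from datetime import datetime, timedelta, timezone

def check_table_health(rows, updated_at, max_staleness=timedelta(hours=1),
                       max_null_rate=0.05):
    issues = []
    now = datetime.now(timezone.utc)
    if now - updated_at > max_staleness:
        issues.append(f"stale: last update {updated_at.isoformat()}")
    if rows:
        null_cells = sum(1 for r in rows for v in r.values() if v is None)
        total_cells = sum(len(r) for r in rows)
        rate = null_cells / total_cells
        if rate > max_null_rate:
            issues.append(f"null rate {rate:.1%} exceeds {max_null_rate:.0%}")
    else:
        issues.append("empty table")
    return issues

rows = [{"id": 1, "amount": None}, {"id": 2, "amount": 10}]
stale_time = datetime.now(timezone.utc) - timedelta(hours=2)
print(check_table_health(rows, stale_time))  # flags both staleness and nulls
```

A real observability platform runs checks like these continuously across every table and source, and attaches lineage context so the issue can be traced to its origin rather than merely described.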


Data Ethics: New Frontiers in Data Governance

Navigating that crucial difference is rarely cut and dried even in simple, day-to-day personal interactions. Still, within the world of data, ethical questions can quickly take on multiple dimensions and present challenges unique to the field. Assessing data ethics can be decidedly confusing, for as Lopez pointed out, “Not all things that are bad for data are actually bad for the world … and vice versa.” Whereas the ethical actions and judgments that we make as private individuals tend to play out within a limited set of factors, the implications of even the most innocuous events within large-scale Data Management can be huge. Company data exists in “space,” potentially flowing between departments and projects, but privacy agreements and other safeguards that apply for some purposes may not apply for others. Data from spreadsheets authored for in-house analytics, for example, might violate a client privacy agreement if it migrates to open cloud storage. 


Distributed ledger technology and the future of insurance

The rise of crypto itself also opens up new and lucrative opportunities for insurers. Not only are we seeing an upward trajectory in consumer adoption of crypto (which jumped worldwide by over 800% in 2021 alone), but there is also significant momentum among institutional investors such as hedge funds and pension funds. This is in part due to recent regulatory and legal clarifications (you can read my reflections on the recent MiCA regulation here), but also the unabated enthusiasm of end investors for this new asset class. Another key accelerator is the growing acceptance of ‘proof of stake’ (as opposed to ‘proof of work’) as the primary consensus mechanism to validate transactions on the blockchain. Critically, proof of stake is far less energy-intensive than its counterpart (by about 99%), and overcomes critical limits on network capacity needed to drive institutional adoption. Ethereum’s transition from proof of work to proof of stake in September of this year was a watershed moment for the industry. As a result, banks are looking to meet institutional demand by launching their own crypto custody solutions.


Digital transformation: 3 pieces of contrarian advice

The contrarian advice, which is now starting to enter the mainstream, is that it’s time to fully embrace the hybrid cloud. Companies are learning that while public cloud still has many benefits, the cost over time adds up. As an organization grows, there are usually opportunities to do at least some of the workload in the private cloud to gain benefits in locality, data transfer, and flexibility of in-house customizations. Other considerations of the private cloud include various compliance, privacy, and security advantages. The hybrid cloud can offer “best of both worlds” benefits, such as edge computing and more effective paths to advanced technologies such as artificial intelligence and machine learning (AI/ML). The “best of both worlds” effect comes into play when taking advantage of APIs and solutions that are open source and based on open standards. One example is keeping a workload in your private cloud when it is simple to run there, and moving it to specialized hardware – such as a supercomputer, quantum computer, or AI/ML (or other workload-specific) hardware – only when that hardware offers a speedup.


EU Cyber Resilience Act

The CRA applies to hardware and software that contain digital components and whose intended use includes a connection to a device or network, and covers all digital products placed on the EU market (including imported products). ... Manufacturers will need to assess the cyber risk of their digital hardware and software and take continued action to fix problems during the lifetime of the product. In addition, before placing any digital product on the market, manufacturers will be required to conduct a formal ‘conformity assessment’ of such product and implement appropriate policies and procedures documenting relevant cybersecurity aspects of the products. Companies will have to notify the EU cybersecurity agency (ENISA) of any exploited vulnerability within the product, and any incident impacting product security, within 24 hours of becoming aware. Manufacturers will also be required to notify users of any incident impacting product security without delay.


How DevOps Helps With Secure Deployments

The goal of DevSecOps is to provide security best practices in a way that doesn’t disrupt team productivity. Secure development is the key to a smooth deployment process. It’s frustrating when security is built into your product but isn’t implemented or taken seriously. DevSecOps restores that focus by reminding everyone just how important and necessary good hygiene practices are for both developers and operations. For security to be more effective and reliable, it should be incorporated from the very beginning. This means that instead of waiting until there is an issue or crisis before implementing protective measures such as firewalls and encryption keys, you want your developers working on security up front so they can ensure everything will work well together later. It also helps ensure security issues are found as early in the process as possible, close to the decision-makers. It’s much easier (and less painful) to fix security issues while you still remember what happened with your project – it’s like having an extra set of eyes on the project.
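One concrete "shift-left" practice is scanning changes for leaked secrets before they ever reach the repository. A minimal sketch of that idea follows; the two patterns are illustrative only, and real scanners such as gitleaks or truffleHog ship far richer rule sets:

```python
import re

# Illustrative secret patterns; real scanners use much larger rule sets
# plus entropy checks to reduce false negatives and positives.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan_for_secrets(text):
    """Return (rule_name, match) pairs for anything that looks secret."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            findings.append((name, m.group()))
    return findings

diff = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\n'
print(scan_for_secrets(diff))  # -> [('aws_access_key', 'AKIAABCDEFGHIJKLMNOP')]
```

Wired into a pre-commit hook or CI stage, a check like this surfaces the problem while the author still has full context, which is exactly the "found early, close to decision-makers" effect the excerpt describes.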


Why enterprise architecture and IT are distinct

The reality is that enterprise architecture too often functions as a specialism within IT. However, one way to draw the distinction is by thinking about the differences between IT and enterprise architecture in terms of information flow. The ability to store and share ideas and information has been, and always will be, at the heart of business actions – and it’s something which we now deeply associate with IT. All of the technological infrastructure, however, means nothing without the context in which it operates. Focusing purely on IT forgets the importance of users, as well as the employees and customers who generate, share and utilize the information. Information can’t be useful if it exists in a vacuum – it needs to connect and permeate a business in a dynamic way that’s bespoke to its ecosystem, people and situation. Enterprise architecture’s role is to enable information flow beyond just establishing the infrastructure. In considering and responding to the way in which systems interact with users and business processes, enterprise architecture is aligned to the long-term initiatives and stakeholders, as opposed to just the deployment of technology and tools alone.


How to connect hybrid knowledge workers to corporate culture

Managers must be able to articulate what the company culture is and translate company culture to daily team life, Gartner said. However, the Gartner survey of knowledge workers revealed that less than half of managers can effectively communicate why the broader organisational culture is important. “Teams and managers are the best mechanism for creating culture connectedness by enabling each team to create their own micro-culture while still supporting the organisation,” said Steele. “Organisations can double employee culture connectedness by embracing micro-cultures.” To help connect hybrid knowledge workers to company culture, managers should gauge employees’ understanding of the broader organisational values and their team’s specific norms and processes, she advised. Managers can then work together with their teams to translate what each value means in the context of their work, said Steele, adding that they can then create a list of behaviours that contribute to the culture and those that will derail it.


How Data Privacy Regulations Can Affect Your Business

An obvious impact of data regulations is that they reduce the amount of data a business can collect. Businesses collect and store data to help develop and improve their company, establishing a better understanding of their customer base and target audience. Unfortunately, storing large quantities of data can pose a significant risk in terms of cybercrime, requiring considerable resources to help protect IT systems. As a result, some businesses are choosing only to collect data that is critical to their operations, limiting the chances of a costly data breach. ... The risk management and compliance of businesses and any third parties involved are very important in the modern business climate. New regulations include many contractual safeguarding procedures, strict data protection, and evidence that compliance has been achieved. ... There have also been new data roles created within businesses in recent years, including those of internal privacy managers, chief data officers (CDOs), privacy executives, data protection officers, and data scientists.


Accelerating SQL Queries on a Modern Real-Time Database

Modern databases have a cluster architecture spanning multiple nodes for scale, performance, and reliability. High density of fast storage is achieved through solid-state disks (SSDs). Hybrid memory architecture (HMA) stores indexes and data in dynamic random-access memory (DRAM), SSD, and other devices to provide cost-effective fast storage capacity. ... Disk (SSD) reads and writes are optimized for latency and throughput. Indexes play a key role in realizing fast access to data. This requires supporting secondary indexes on integer, string, geospatial, map, and list columns. ... The thread architecture on a cluster node is optimized to exploit parallelism of multicore processors of modern hardware, and also to minimize conflict and maximize throughput. The data is distributed uniformly across all nodes to maximize parallelism and throughput. The client library connects directly to individual cluster nodes and processes a request in a single hop, by distributing the request to nodes where it is processed in parallel, and assembling the results.
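The single-hop, scatter-gather read pattern the excerpt describes can be sketched in a few lines. This is a toy in-process model – the "nodes" are plain dicts and the query is a predicate – whereas a real client library speaks the cluster's wire protocol and routes by partition map:

```python
# Toy scatter-gather: the client fans a query out to every node in
# parallel and assembles the partial results, mimicking a single-hop
# distributed read across uniformly partitioned data.
from concurrent.futures import ThreadPoolExecutor

nodes = [  # each "node" holds one partition of the data
    {"alice": 3, "bob": 7},
    {"carol": 1, "dave": 9},
]

def query_node(node, predicate):
    """Each node filters its own partition in parallel with the others."""
    return {k: v for k, v in node.items() if predicate(v)}

def scatter_gather(nodes, predicate):
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        partials = pool.map(lambda n: query_node(n, predicate), nodes)
    merged = {}
    for part in partials:  # assemble partial results into one answer
        merged.update(part)
    return merged

print(scatter_gather(nodes, lambda v: v > 2))  # -> {'alice': 3, 'bob': 7, 'dave': 9}
```

Because every node works only on its own partition, adding nodes spreads both data and query work, which is the parallelism and throughput property the excerpt attributes to the uniform data distribution.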



Quote for the day:

"You must stand firm if you wish to lead the firm" -- Constance Chuks Friday

Daily Tech Digest - November 04, 2022

Can today’s videoconferencing tech evolve into tomorrow’s metaverse?

The market continues to reject headset options for virtual reality (VR), with the largest recent failure being 3D TV, which had relatively light and inexpensive headsets compared to other augmented and VR solutions. There are two ways to approach this, and they aren’t mutually exclusive. One is to eliminate the headset and use a different technology such as “hard light” or LED walls. Another, more likely near-term path is to create headsets that have far broader applicability than current headsets do. This means making them more attractive to wear and providing a compelling secondary use (such as watching video entertainment, privacy and security, and safety). If I want to use a headset because it does something I want, while also being useful for videoconferencing, I’m more likely to try it for collaboration. Right now, despite the hype, the metaverse isn’t real enough to be compelling. And headsets are tied tightly to VR experiences that aren’t going to drive their use en masse. This leads to an imbalance between cost, appearance, and utility.


Fill the cybersecurity talent gap with inquisitive job candidates

Curiosity is also critical when entering the cybersecurity field. Especially for those coming from an atypical background, curiosity can lead to the discovery of solutions that may have otherwise been overlooked. It can help them figure out how hackers think and behave, and influence proactive defense strategies after being able to step into their shoes. Curious minds can further lead to the discovery of additional interests within the many facets of the field, making those individuals more well-rounded cybersecurity professionals. ... Another important quality hiring teams can look for in potential cybersecurity candidates is a strong willingness to learn. This encompasses both tenacity and curiosity: Those who are determined and interested in discovering new information are consistently willing and ready to face new challenges. Cybersecurity can be complex and multifaceted, and those who can be patient and take the time to learn the breadth and depth of the field can be successful in unique ways.


Looking for a remote work job? It's getting harder to find one

"In many ways, employees still hold the power to demand more from their employers when it comes to salary, flexibility and benefits. But this power balance is likely to start levelling out in the coming months," she said. Employees and jobseekers are also bracing for an economic slump, with LinkedIn finding that candidates' confidence in their ability to improve their financial situation has "decreased or remains low" compared with August 2022. Guy Berger, principal economist at LinkedIn, said that while employers could not eliminate uncertainty in the year ahead, they could at least "mitigate it" for employees by putting more effort into supporting employee morale. "Consider relatively low-cost, high-value benefits that you might have overlooked before," said Berger. "Don't underestimate the calm that can follow when you reassure employees that you hear them, and that times aren't tough forever." Indeed, salary isn't the only thing employees care about in their careers: work-life balance, flexible-working arrangements and upskilling all rank highly, LinkedIn found.


Solving the Culture Conundrum in Software Engineering

The role of the software engineer has changed; it is no longer about writing code in isolation without much regard for or knowledge of how it benefits the business. Developers work better when they have clarity about the direct impact their work will have on achieving business goals and on the bottom line. It’s down to business leaders to communicate these challenges and goals (in other words, understand the “why”) to help software developers understand what they’re trying to achieve. But doing so in a way that moves towards a better working culture requires a new approach to building and managing software development teams. The first step is casting aside the negative stereotypes many have of software engineers and celebrating the intellectual and cultural diversity within their teams. Diversity of personnel brings a diversity of personalities, which is crucial to creating more inclusive cultures that accept and welcome all characters with open arms. While this may seem obvious, what is often overlooked is the impact diversity can have on stimulating and increasing innovation.


8 bad communication habits to break in IT

“IT leaders are incredibly talented and well-versed in what they do. They know the problem and solution well, and they’re often eager to point out the features and functionality when fielding questions from the end user. “However, when addressing questions from the business, IT leaders need to take a step back to ensure they’re answering the correct question for the right audience. Failing to do so can lead to confusion and credibility gaps. When you mix technology and business professionals, the way you answer questions may need to shift. Make sure you understand in detail what’s being asked and determine how to answer the question in a way that will make the most sense to them. The more IT leaders listen carefully and ask clarifying questions when needed, the better they’ll become at communicating.” ... “As a profession, technologists don’t have a strong reputation as great listeners. We have a bad habit of hearing and immediately responding, which makes it seem like we’re not listening. “We teach IT professionals the H.E.A.R. model: hear, empathize, analyze, respond. This is especially important when we need to have a difficult conversation, like addressing an idea that isn’t practical ... "


Startups Scratch the Surface of AGI Without Really Understanding It

Researcher and author Gary Marcus has often pointed out how contemporary AI’s dependence on deep learning is flawed due to this gap. While machines can now recognise patterns in data, this understanding of the data is largely superficial and not conceptual—making the results difficult to interpret. Marcus has said that this has created a vicious cycle where companies are caught in a trap to pursue benchmarks instead of the foundational ideas of intelligence. This search for clarity pushed a lot of interest into interpretability and the money followed later. Until a couple of years ago, explainable AI witnessed its time in the spotlight. There was a wave of core AI startups like Kyndi, Fiddler Labs and DataRobot that integrated explainable AI within them. Explainable AI started gaining traction among VCs, with firms like UL Ventures, Intel Capital, Lightspeed and Greylock seen actively investing in it. A report by Gartner stated that “30% of government and large enterprise contracts will require XAI solutions by 2025”.


Is an Outsourced DPO Function the Answer?

Some DPO duties lend themselves to being carried out by a third party outside the business, such as the volume tasks mentioned above, but for others it will be more appropriate to carry them out in-house. For example, effective data mapping requires an intricate knowledge of the company’s day-to-day business processes that may be difficult to communicate to a third-party provider. Likewise, an internal DPO may find it easier to monitor the company’s ongoing data protection compliance, given their involvement in the organisation’s operations. Businesses may therefore wish to consider a hybrid approach, whereby some DPO functions are contracted to an external provider while certain duties are fulfilled within the organisation. Which processes are outsourced and which processes remain internal will depend on the specific processing activities carried out and where internal capabilities and strengths lie. Experts could also be engaged to work with a business to create an internal privacy framework which is then applied uniformly both internally by staff, and externally by an outsourced DPO function.


How Apiiro leverages application security for the software supply chain

Because cybercriminals look to exploit any vulnerabilities they can find in an organization’s application stacks, both security teams and developers need to be extremely proactive at pinpointing and remediating vulnerable applications and code throughout the software supply chain. Apiiro aims to do this by enabling developers to discover every API, service and artifact to create a software bill of materials (SBOM), as well as to identify exposed secrets, API and OSS vulnerabilities, and misconfigurations that increase risk. “The unrelenting demand for next-generation application security solutions has allowed us to deploy our product at scale with leading Fortune 500 customers,” said Idan Plotnik, cofounder and CEO of Apiiro. “Early innovation enables us to grow faster and more efficiently than the competition, and we are building the company for hyper-growth. The combination of our team, business momentum, and support from top-tier investors positions Apiiro to continue to lead a growing industry.”
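To make the SBOM idea concrete, here is a minimal sketch that turns a dependency list into an SBOM-like inventory. This is a simplified illustration only – it is not Apiiro's product, and real SBOMs follow the CycloneDX or SPDX specifications and carry far more metadata (licenses, hashes, suppliers):

```python
import json

# Build a minimal SBOM-style inventory from a name->version mapping.
# Real SBOM formats (CycloneDX, SPDX) include licenses, hashes, and
# supplier data; this sketch keeps only the component list.
def build_sbom(app_name, dependencies):
    return {
        "application": app_name,
        "components": [
            {"name": name, "version": version}
            for name, version in sorted(dependencies.items())
        ],
    }

deps = {"requests": "2.28.1", "flask": "2.2.2"}
print(json.dumps(build_sbom("payments-service", deps), indent=2))
```

Even an inventory this simple is useful: when a new CVE lands, knowing exactly which applications carry which component versions is the first step in pinpointing what needs remediation.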


Key Basic Principles to Secure Kubernetes’ Future

While Kubernetes is designed to be secure, only responding to requests that it can authenticate and authorize, it also gives developers bespoke configuration options, meaning it is only as secure as the role-based access control (RBAC) policies that developers configure. Kubernetes also uses what’s known as a “flat network” that enables groups of containers (or pods) to communicate with other containers by default. This raises security concerns as, in theory, attackers who compromise a pod can access other resources in the same cluster. Despite this complexity, the solution to mitigate this risk is fairly straightforward: a zero trust strategy. With such a large attack surface, a fairly open network design, and workloads sitting across different environments, a zero trust architecture, one that never trusts and always verifies, is crucial when building with Kubernetes. ... All internal requests are considered suspicious, and authentication is required from top to bottom. This strategy helps mitigate risk by assuming threats exist on the network at all times, and so strict security procedures are constantly maintained around every user, device and connection.
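The "never trust, always verify" principle can be sketched as a service that authenticates every internal request, even ones arriving from inside the cluster. HMAC-signed requests with a shared key are just one illustrative mechanism here; real zero-trust deployments typically rely on mTLS, service meshes, and RBAC policies rather than this toy scheme:

```python
import hashlib
import hmac

# Zero-trust sketch: every internal request must carry a valid
# signature; nothing is trusted merely for being "inside" the network.
SHARED_KEY = b"demo-key"  # illustrative only; use real key management

def sign(message: bytes) -> str:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def handle_request(message: bytes, signature: str) -> str:
    # Constant-time comparison avoids leaking information via timing.
    if not hmac.compare_digest(sign(message), signature):
        raise PermissionError("request rejected: invalid signature")
    return f"ok: {message.decode()}"

msg = b"GET /pods"
print(handle_request(msg, sign(msg)))      # accepted: signature verifies
try:
    handle_request(msg, "bad-signature")   # rejected, even from "inside"
except PermissionError as e:
    print(e)
```

The design choice to model is that the check runs on every request, top to bottom: there is no trusted zone where verification is skipped, which is what blunts the flat-network risk of one compromised pod reaching its neighbours.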


Stemming the Security Challenges Posed by SaaS Sprawl

Corey O’Connor, director of products at DoControl, a provider of automated SaaS security, notes that remote and hybrid working models made a significant impact on both SaaS utilization and sprawl. “When they started to gain traction, CIOs responded by allowing the business to use whatever tools necessary to enable the business,” he explains. “This challenged CISOs as well as IT and security teams given the surge in SaaS adoption and utilization.” This created security gaps that needed to be addressed as organizations began to navigate the “new normal” for working environments. “With the workforce now more decentralized, there's a critical need to centralize security throughout all the disparate SaaS applications meant to drive business enablement,” O'Connor says. Ofek agrees, noting as more organizations adopt hybrid work models, security and IT teams will need to devise new processes, policies, and controls around SaaS applications to allow for secure but easy access, and it starts with visibility.



Quote for the day:

"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld