Daily Tech Digest - October 09, 2022

7 ways to foster IT innovation

Asking your team to be innovative is like asking an athlete to play better. While it may feel motivational and instructive to say it, it’s most often taken as disapproving and vague by the person receiving it. So if you want people to innovate, define specifically what you’re looking for them to do. Think specificity. My definition of IT innovation: The successful creation, implementation, enhancement, or improvement of a technical process, business process, software product, hardware product, or cultural factor that reduces costs, enhances productivity, increases organizational competitiveness, or provides other business value. ... Building an innovative culture is not only people-oriented but process-oriented. You must develop a formalized process that identifies, collects, evaluates, and implements innovative ideas. Without this process, great ideas and potential innovations die on the vine. There also has to be an appreciation and understanding that innovative ideas can come from many directions, including your employees, internal business partners, customers, vendors, competitors, or through accidental discovery.


Shift Left Approach for API Standardization

Having clear and consistent API design standards is the foundation for a good developer and consumer experience. They let developers and consumers understand your APIs quickly and effectively, reduce the learning curve, and enable them to build to a set of guidelines. API standardization can also improve team collaboration, provide guiding principles that reduce inaccuracies and delays, and contribute to a reduction in overall development costs. Standards are so important to the success of an API strategy that many technology companies – like Microsoft, Google, and IBM – as well as industry organizations like SWIFT, TMForum, and IATA use and support the OpenAPI Specification (OAS) as their foundational standard for defining RESTful APIs. ... The term “shift left” refers to a practice in software development in which teams begin testing earlier in the development cycle, helping them focus on quality and work on problem prevention instead of detection.
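To make the idea of a design standard concrete, here is a minimal sketch: an OpenAPI 3.0 description expressed as a Python dict, plus a tiny lint check of the kind an API standards program might enforce. The guideline used here (every operation must carry an `operationId` and a `summary`) is a hypothetical example of a team convention, not an official OAS rule.

```python
# A minimal OpenAPI 3.0 document as a Python dict.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Orders API", "version": "1.0.0"},
    "paths": {
        "/orders": {
            "get": {
                "operationId": "listOrders",
                "summary": "List all orders",
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}

def lint(spec):
    """Return a list of design-guideline violations found in the spec."""
    problems = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            for field in ("operationId", "summary"):
                if field not in op:
                    problems.append(f"{method.upper()} {path}: missing {field}")
    return problems

print(lint(spec))  # an empty list means the spec meets the guideline
```

In practice such checks are usually run by off-the-shelf spec linters in CI, so every API is held to the same guidelines before it ships.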


3 ways CIOs can empower their teams during uncertainty

By understanding data from different parts of the business, CIOs are in a unique position to see first-hand what efforts are producing the highest return. They can also identify gaps in knowledge and efficiency. Data analytics provide information used to set goals and expectations that allow the company to adapt in real-time as priorities change. As data stewards, CIOs will determine the origin of the most relevant data points and must be able to present these to other C-suite executives to help them make the best-informed choices. ... As the face of the IT department, CIOs can set the tone for a company’s culture, both inside and outside the building’s walls. They can articulate why new digital technologies are implemented and foster a forward-thinking environment. Additionally, they can connect the day-to-day actions of IT with their greater strategic vision. ... CIOs can help drive enterprise agility by always putting the customer at the center of decisions. The CIO can collaborate closely with business leaders to understand the business priorities and then develop a plan for how technology can drive the most value for the customer.


In defense of “quiet working”

Leaders need to raise their game and do their part to make work more engaging and crack down on bad managers who make life miserable for their teams. They need to more clearly articulate how people can contribute and what is expected of them. Companies need to rethink the “why” behind return-to-office policies, for example, so they don’t just feel like ham-handed directives based on a lack of trust in employee productivity. This issue of quiet quitting is fraught, and I want to be clear that there is a balance of shared responsibility here. Bad bosses give their employees plenty of reasons to throw up their hands and disengage. Companies need to make work more engaging beyond just coming up with lofty purpose statements. But let’s also give a shout-out to the value of a strong work ethic. A lot of companies are making progress and doing their part to try to figure out the new world of work. And so are the #quietworking employees. Green’s story captures a quality I’ve always admired in many people: they own their job, whatever it is.


Cancer Testing Lab Reports 2nd Major Breach Within 6 Months

The narrow time span between CSI's two major health data breaches will potentially raise red flags with regulators, says Greene, a former senior adviser at HHS OCR. "HHS OCR will often look at what actions the entity took in response to the first data breach and whether the multiple breaches were due to a similar systematic failure, such as a failure to conduct an enterprisewide risk analysis," he says. While there are definite negatives involving major breaches being reported within a short time frame, there can also be a sliver of optimism related to the subsequent incident. ... "While multiple breaches may reflect widespread information security issues, I have also seen it occur for more positive reasons, such as an entity improving already-good audit practices and, as a result, detecting more cases of users abusing their access privileges." ... "We believe the access to a single employee mailbox occurred not to access patient information, but rather as part of an effort to commit financial fraud on other entities by redirecting CSI customer health care provider payments to an account posing as CSI using a fictitious email address," CSI says.


Ransomware: This is how half of attacks begin, and this is how you can stop them

While over half of ransomware incidents examined started with attackers exploiting internet-facing vulnerabilities, compromised credentials – usernames and passwords – were the entry point for 39% of incidents. There are several ways that usernames and passwords can be stolen, including phishing attacks or infecting users with information-stealing malware. It's also common for attackers to simply breach weak or common passwords with brute-force attacks. Other methods that cyber criminals have used as the initial entry point for ransomware attacks include malware infections, phishing, drive-by downloads, and exploiting network misconfigurations. No matter which method is used to initiate ransomware campaigns, the report warns that "ransomware remains a major threat and one that feeds on gaps in security control frameworks". Despite the challenges that can be associated with preparing for ransomware and other malicious cyber threats – especially in large enterprise environments – Secureworks researchers suggest that applying security patches is one of the key things organisations can do to help protect their networks.
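Since compromised credentials accounted for 39% of entry points, two basic defenses follow directly from the excerpt: reject known-breached or weak passwords at set time, and lock accounts after repeated failures to blunt brute-force attempts. The sketch below illustrates both; the thresholds and the tiny breached-password list are illustrative assumptions, not recommendations.

```python
# Illustrative checks against credential-based entry.
BREACHED = {"password", "123456", "qwerty", "letmein"}  # stand-in for a real breach corpus
MAX_FAILURES = 5

failures = {}  # username -> consecutive failed login attempts

def password_allowed(candidate: str) -> bool:
    """Reject short or known-breached passwords at set time."""
    return len(candidate) >= 12 and candidate.lower() not in BREACHED

def record_login(user: str, success: bool) -> str:
    """Track failures and lock the account once the threshold is hit."""
    if success:
        failures[user] = 0
        return "ok"
    failures[user] = failures.get(user, 0) + 1
    return "locked" if failures[user] >= MAX_FAILURES else "retry"
```

Real deployments would layer multi-factor authentication and server-side checks against full breach corpora on top of logic like this.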


Landmark US-UK Data Access Agreement Begins

“The Data Access Agreement will allow information and evidence that is held by service providers within each of our nations and relates to the prevention, detection, investigation or prosecution of serious crime to be accessed more quickly than ever before,” noted a joint statement issued by Washington and London. “This will help, for example, our law enforcement agencies gain more effective access to the evidence they need to bring offenders to justice, including terrorists and child abuse offenders, thereby preventing further victimization.” However, legal experts have also warned that any UK service providers responding to requests from US law enforcers would have to consider whether there was a “legal basis” for data transfers under the GDPR. Data flowing the other way would not be subject to the same concerns, given the European Commission’s adequacy decision regarding the UK. That said, Cooley predicted that OPOs would still come under intense legal scrutiny.


What IT will look like in 2025

To succeed, both now and as the future unfolds, CIOs will need to synthesize a range of technologies cohesively to deliver experiences, functionalities, and services to employees, partners, and most definitely customers. “When you think about 2025, our teams will continue to focus on serving both customers, internal and external, and to find ways to make our business better on a daily basis,” says Richard A. Hook, executive vice president and CIO of Penske Automotive Group and CIO of Penske Corp. “In addition, our teams will continue to evolve their skills to ensure everyone has at least a security baseline of knowledge (deeper depending on roles), increased depth on various cloud platforms and configurations, and the skills necessary to build automation within IT and the business.” ... “We see that leaders increasingly recognize the next phase of new value will come from transformational efforts — seeking to change their business models, finding new forms of digitalized products and services, new ways to reach new customer segments, etc.,” says Gartner’s Tyler.


Understanding Kafka-on-Pulsar (KoP): Yesterday, Today, and Tomorrow

To provide a smoother migration experience for users, the KoP community came up with a new solution. They decided to bring native Kafka protocol support to Pulsar by introducing a Kafka protocol handler on Pulsar brokers. Protocol handlers were a new feature introduced in Pulsar 2.5.0. They allow Pulsar brokers to support other messaging protocols, including Kafka, AMQP, and MQTT. Compared with the above-mentioned migration plans, KoP features the following key benefits:

- No Code Change: Users do not need to modify any code in their Kafka applications, including clients written in different languages, the applications themselves, and third-party components.
- Great Compatibility: KoP is compatible with the majority of tools in the Kafka ecosystem. It currently supports Kafka 0.9+.
- Direct Interaction With Pulsar Brokers: Before KoP was designed, some users tried to make the Pulsar client serve requests sent by the Kafka client by creating a proxy layer in the middle. This could impact performance as it entailed additional routing requests. By comparison, KoP allows clients to communicate directly with Pulsar brokers without compromising performance.
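The "no code change" benefit can be sketched as a configuration diff: an existing Kafka producer keeps its settings as-is and only points its bootstrap servers at a Pulsar broker running the KoP protocol handler (KoP listens on the Kafka wire port, 9092 by default). The hostnames below are placeholders.

```python
# Existing Kafka client configuration (unchanged application code assumed).
kafka_config = {
    "bootstrap.servers": "kafka-broker:9092",
    "acks": "all",
    "client.id": "orders-producer",
}

# Migrating to Pulsar via KoP is a config change, not a code change:
kop_config = dict(kafka_config, **{"bootstrap.servers": "pulsar-broker:9092"})

# Everything except the endpoint is identical.
changed = {k for k in kafka_config if kafka_config[k] != kop_config[k]}
print(changed)  # only bootstrap.servers differs
```

The same Kafka client library, serializers, and application logic keep working because KoP speaks the Kafka protocol on the broker side.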


How to Adapt to the New World of Work

Burnout is a real and serious issue facing the workforce, with 43% of employees stating they are somewhat or very burnt out. Burnout is a combination of exhaustion, cynicism, and lack of purpose at work. This burden results in employees feeling worn out both physically and mentally, unable to bring their best to work. It often causes employees to take long leaves of absence in an attempt to recover and is a key driver of turnover, as they seek new roles that they hope will reinvigorate their passion and drive. Sources of burnout might include overwork, lack of necessary support or resources, or unfair treatment. Feedback tools can help find the root causes of burnout and how to mitigate them. Implementing wellness tools is another way to address this issue and demonstrate that the company prioritizes mental health. Employees whose organizations provide wellness tools are less likely to be extremely burnt out. Currently, only 26% of tech employees say their company provides wellbeing support tools.



Quote for the day:

"Leadership is the wise use of power. Power is the capacity to translate intention into reality and sustain it." -- Warren Bennis

Daily Tech Digest - October 08, 2022

How to manage IT infrastructure in a fast-growing company: the DataRobot experience

With Jamf, we offered a new form of employee communication with IT through the IT Self-Service application. It serves as a portal through which company employees can change the status quo of established business processes within the company. Our position: IT Self-Service is an employee’s first IT companion and the first line of IT help. The main idea of this service is to reduce the load on the IT team and reduce the number of open tickets to HelpDesk. This means more efficient use of the company’s IT resources. ... Since classical DevOps engineers were at the origin of the company’s IT onboarding process automation, the scenario of preparing computers for onboarding was implemented with the world’s most popular DevOps configuration management system, Ansible. Ansible itself is written in Python, and its playbooks use the declarative markup language YAML. The approach worked well because it solved the problem of preparing computers for both macOS and Ubuntu platforms, with platform-dependent branching in the deployment script.
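The platform-dependent branching described above maps naturally onto Ansible's `when` conditionals over gathered facts. The playbook below is a hypothetical sketch of that pattern, not DataRobot's actual script; the group name and package lists are placeholders.

```yaml
# Hypothetical onboarding playbook: one play, branching per target platform.
- name: Prepare a new employee workstation
  hosts: new_laptops
  tasks:
    - name: Install base tools on macOS
      community.general.homebrew:
        name: [git, python]
        state: present
      when: ansible_facts['os_family'] == 'Darwin'

    - name: Install base tools on Ubuntu
      ansible.builtin.apt:
        name: [git, python3]
        state: present
      when: ansible_facts['distribution'] == 'Ubuntu'
```

Each task runs only on the matching platform, so one playbook covers both fleets without duplicating the overall deployment scenario.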


How to make your APIs more discoverable

API discoverability is a key aspect of any API management initiative. The discoverability of an API directly impacts its adoption and usage. A typical big enterprise with multiple development teams might build hundreds of APIs that they would want to reuse internally or share with partners that build complementary applications. If the teams are not able to discover existing APIs, they might build a new API with the same functionality. It might lead to a duplication of efforts and underutilization of the existing API. It is also an unscalable practice to contact the API developer each time someone wants to use the API. There needs to be a better and more hands-off way for internal teams and partners to discover and understand the usage of these APIs without directly contacting the developers who built them. API discoverability does not just mean making it easy to find an API by providing an inventory. It should also address some key aspects that are important for an API consumer, such as understanding the API through documentation, request and response format, sign-up options, and the business terms and conditions (in case of a partner) of using the API.
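The point that discoverability is more than an inventory can be illustrated with a toy in-memory catalog: each entry carries the consumer-facing details the excerpt lists (documentation, sign-up model), and a simple search lets teams find existing APIs before building duplicates. All names, fields, and URLs here are illustrative.

```python
# A toy internal API catalog with searchable metadata.
catalog = [
    {"name": "orders-api", "tags": {"orders", "commerce"},
     "docs": "https://dev.example.com/orders", "signup": "self-service"},
    {"name": "payments-api", "tags": {"payments", "commerce"},
     "docs": "https://dev.example.com/payments", "signup": "approval-required"},
]

def discover(keyword: str):
    """Find catalog entries whose name or tags match a keyword."""
    kw = keyword.lower()
    return [api for api in catalog if kw in api["name"] or kw in api["tags"]]

print([api["name"] for api in discover("commerce")])
```

A real developer portal adds request/response schemas, sandbox sign-up, and business terms on top of this kind of index, so consumers never need to contact the original developers.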


The long-term answer to fixing bias in AI systems

Some of these [long-term fix] recommendations are hard. For instance, one way these systems get biased is they're obviously being run by for-profit organizations. The usual players are Google, Facebook and Amazon. They are banking on their algorithms trying to optimize user engagement, which on the surface seems like a good idea. The problem is, people don't engage with things just because they are good or relevant. More often, they engage with things because the content has certain kinds of emotions, like fear or hatred, or certain kinds of conspiracy. Unfortunately, this focus on engagement is problematic. It's primarily because an average user engages with things that are often not verified, but are entertaining. The algorithms essentially end up learning that, OK, that's a good thing to do. This creates a vicious cycle. A longer-term solution is to start breaking the cycle. That needs to happen from both sides. It needs to happen from these services, the tech companies that are targeting for higher engagement. They need to start changing their formula for how they consider engagement or how they optimize their algorithms for something other than engagement.
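To make "optimize for something other than engagement" concrete, here is a toy ranking comparison: the same two items ordered by raw engagement versus by a blended score that also weighs a hypothetical quality/verification signal. The items, scores, and weights are entirely illustrative.

```python
# Toy illustration of a blended ranking objective.
items = [
    {"id": "conspiracy-post", "engagement": 0.9, "quality": 0.1},
    {"id": "verified-news", "engagement": 0.5, "quality": 0.9},
]

def rank(items, quality_weight=0.0):
    """Order items by a weighted mix of engagement and quality."""
    def score(it):
        return (1 - quality_weight) * it["engagement"] + quality_weight * it["quality"]
    return [it["id"] for it in sorted(items, key=score, reverse=True)]

print(rank(items))                      # pure engagement ranking
print(rank(items, quality_weight=0.6))  # blended objective
```

With a pure engagement objective the sensational item wins; once quality carries enough weight, the verified item ranks first, which is the cycle-breaking change the excerpt argues for.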


Great leaders ask great questions: Here are 3 steps to up your questioning game.

Having a good arsenal of questions at one’s disposal is a must for any leader, but the one staple of any leader is the open-ended question. Asking open-ended questions is like adjusting the lens of a camera, opening the aperture to create a wider field of view. This wider field sets a tone of receptivity, signaling that you are open to new information, in learning mode, and ready for a dialogue, not a monologue. ... You may have heard the term active listening. It involves paying close attention to words and nonverbal actions and providing feedback to improve mutual understanding. But have you ever stopped to consider passive listening? Passive listening also involves listening closely to the speaker but without reacting. Instead, passive listening leaves space for silence. By combining both of these modes, we achieve what we call effective listening. ... One of the most powerful response techniques is the ability to ask questions. Questions frame the issue, remove ambiguity, expose gaps, reduce risk, give permission to engage, enable dialogue, uncover opportunities, and help to pressure-test logic.


The 10 Immutable Laws of Testing

The bug count measures what annoys our users the most - Bugs aren’t a measure of quality (that’s measured by things like fitness for purpose, reliable delivery, cost and other stuff). But bugs are what annoy our users most. If you don’t believe me, consider this: over 60% of users delete an app if it freezes, crashes or displays an error message. Cue P!nk. Bugs exist because we write them into our code: Complexity defeats good intentions - We all know where bugs come from: developers writing code (enabled by users who want new functionality). Bugs are the visible evidence that our code is sufficiently complicated that we don’t fully understand it. We don’t like creating bugs, wish we didn’t do it, and have developed some coping skills to address the problem … but we still write bugs into our code. Bugs (like tchotchkes) accumulate over time—every time we add or change functionality, to be precise - Everyone has an Aunt Edna: whenever she goes out, she inevitably brings home some new thing to put on a shelf. The inevitable result of creating software is more bugs (and, yes, more/better functionality).


Reliable Continuous Testing Requires Automation

Automation makes it possible to build a reliable continuous testing process that covers the functional and non-functional requirements of the software. Preferably this automation should be done from the beginning of product development to enable quick release and delivery of software and early feedback from the users. ... We see more and more organizations trying to adopt the DevOps mindset and way of working. Velinov stated that software engineers, including the QA engineers, have to care not only about how they develop, test, and deliver their software, but also about how they maintain and improve their live products. They have to think more and more about the end user. Velinov mentioned that a significant requirement is and has always been to deliver software solutions quickly to production, safely, and securely. That’s impacting the continuous testing, as the QAs have to adapt their processes to rely mainly on automation for quick and early feedback, he said.
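Continuous testing depends on checks that are fast and deterministic enough to run on every change. Here is a minimal sketch of that kind of automated check; the function under test is a stand-in for real product code, and in practice the assertions would live in a test suite (e.g. pytest) that gates every commit.

```python
# Function under test: a stand-in for real product logic.
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, rounding to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Fast, deterministic checks of the kind a CI pipeline runs on each change.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(19.99, 0) == 19.99
try:
    apply_discount(10.0, 150)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for an invalid discount")
```

Because the checks are cheap to run, they can execute on every commit, giving the quick and early feedback the article describes.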


Seven Principles I Follow To Be a Better Data Scientist

Data science is an ever-changing field, so keeping up with the latest trends and techniques is essential to ensuring consistent performance at work. For data scientists who keep a full-time job, it is unrealistic to spend weeks learning something new to be able to apply it to your working projects. We need to learn fast, and one way to achieve this is through learning by doing. Rather than getting lost in too many details and background information on a new concept, the fastest way to fully grasp it is to follow a trustworthy practical tutorial and replicate it, then try to make customized innovations to achieve better results in your projects. Take learning the Random Forest algorithm as an example. We sure need to know some basics about the algorithm — what it is, where it can be used, etc. Then we just use it in a current project, following some tutorials, and see what the results are. Blog posts with examples are great sources to educate yourself fast, compared to textbooks or online courses. Lastly, we troubleshoot the results and look for ways to improve the application of the algorithm.
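In the learn-by-doing spirit described above, here is a minimal Random Forest run you can replicate and then tweak. It assumes scikit-learn is installed and uses its built-in iris dataset so the example is self-contained; the hyperparameters are just sensible defaults to start experimenting from.

```python
# Minimal Random Forest example on a built-in dataset (requires scikit-learn).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

From here, the "customized innovation" step might be tuning `n_estimators` or `max_depth`, or swapping in your own project's data, and comparing results against this baseline.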


What Good Security Looks Like in a Cloudy World

When it comes to security issues and fixes, it is extremely important to be able to differentiate between new and old findings because this will also eventually affect the next two pillars: prioritization and remediation. One of the things DevSecOps tools have made possible is a real-time understanding of what’s happening in our code, with processes aligned with developer workflows, such as fixes at commonly accepted gates, like pull requests, and even earlier with precommit hooks or in-IDE alerts. A similar approach to the way we prevent issues from being merged into our code base through common CI gating can be applied to runtime-related tools during the CD phase. In this way, you can prevent runtime-related issues from reaching production, as well. So if we are able to discover security flaws while we’re still coding or in predeployment to production systems, these can be handled now and within the developer or operational context and need never go into the backlog. This is a very important distinction between our categories of security issues.
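The new-versus-old distinction lends itself to a simple CI gate: compare the current scan's findings against a stored baseline and fail the pipeline only on findings that were not already known and triaged. The sketch below uses illustrative finding IDs; real tools would emit richer records, but the set difference is the core idea.

```python
# Baseline of known, already-triaged findings vs. the latest scan.
baseline = {"CVE-2021-0001", "CVE-2021-0002"}
current_scan = {"CVE-2021-0002", "CVE-2022-9999"}

new_findings = current_scan - baseline  # only what this change introduced

def gate(new_findings) -> bool:
    """Pass the CI gate only when the change introduces nothing new."""
    return len(new_findings) == 0

print(sorted(new_findings), "gate passed:", gate(new_findings))
```

Gating on `new_findings` keeps legacy issues in their existing triage process while blocking fresh flaws at the pull request, which is exactly what keeps them out of the backlog.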


Avoiding the Top Mistakes Made by Tech Startups

Scaling too quickly increases a startup's burn rate, reducing the time it has to demonstrate key metrics for its next funding round and other milestone events, Yépez explains. Such a startup can also trash trusted customer relationships by failing to deliver goods or services as promised. “That burned cash won’t come back, and neither will that customer,” he cautions. Conversely, limited funding forces some struggling businesses to assign staff members tasks that fall outside of their skillsets. “These responsibilities often suffer from poor execution and may have severe consequences for the startup,” says Thomas Dolan, co-founder of 28Stone Consulting, an IT and fintech consulting firm. Many startups also neglect to protect their intellectual property. In their rush to go to market, some founders unwittingly disclose or offer their core technology to potential investors and other external parties. Such activity triggers deadlines for filing patent applications, says Kyle Graves, an attorney at law firm Snell & Wilmer.


Becoming “cloud smart” — the path to accelerated digital innovation

“Cloud chaos” comes from a landscape of unknowns. What is our enterprise cloud architecture? How do public and private clouds co-exist? What about edge computing? How do we align legal and compliance requirements in the multi-cloud world for heavily regulated industries such as fintech? Those daunting tasks and risks reflect the multi-cloud complexity and chaos we constantly live in. Having worked with many organisations transitioning away from “cloud chaos”, I see similar challenges regardless of the size of the business. It takes a vast amount of effort to architect and manage multi-cloud platforms. Think about scalability, interoperability, consistency, and a unified user experience. Think about the skill sets and knowledge required to build and operate cloud-native apps. Also, think about automating and optimising cloud management, architect cloud, and edge infrastructure. Think about connecting and securing apps and clouds. And finally, think about app security, legal, and compliance among other areas. These challenges keep CIOs up at night.



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - October 06, 2022

Addressing the Complexities of Cybersecurity at Fintech Enterprises

Effective IT governance is the cornerstone of cybersecurity as it is about leadership: how leaders treat IT as a cost center vs. as an enterprisewide strategic asset. Governance is made more complex for central banks and regulatory and supervisory authorities due to regulation, supervision and compliance. There are many global models, frameworks and standards that can be referenced for complete cybersecurity governance and management, but ultimately, a mature organization chooses its own preferred guidance. The US National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF), the US Federal Financial Institutions Examination Council (FFIEC) Cybersecurity Assessment Tool, the International Organization for Standardization (ISO) 27000 series of standards, and COBIT® are valuable resources for effective IT governance. These frameworks clearly describe the roles and responsibilities of top management, the importance of IT strategic alignment in achieving enterprise objectives, the importance of leadership and top management support in addressing IT and cybersecurity issues, the importance of effective IT risk management, and proper reporting strategies.


CIO Guy Hadari on the management skills that set IT leaders apart

As Hadari sees it, “The challenge is that most up-and-coming IT professionals are trained to be technology implementers and innovators, and so are ill equipped for the management aspects of the job,” something that he experienced personally. In his first few years as CIO, Hadari’s comfort zone was data, analytics, and statistics, and that was the lens he used to lead IT. ... Hadari encourages his team to use data, surveys, and conversations to understand the perceptions of IT, and the problems that create those perceptions. He finds that comparing how IT rates itself to how the business rates IT reveals a great deal about where IT needs to focus. “Collecting all of that information is not an easy process, but it is the beginning of change,” says Hadari. “It means that we can accept our challenges, bring them out into the open, and do something about them.” At Biogen, Hadari’s extended leadership team, which is one level below his senior IT leadership team, owns the strategy and plan for IT improvement. “They build it, execute on it, and own it,” he says.
Different employee segments will require different messaging. The IT group will benefit from different messaging than the sales group. Don’t make the mistake, though, of believing IT employees don’t need security awareness—they do. Security teams should take steps to understand employees’ current comprehension of security messaging and where gaps may exist. And, of course, security awareness marketers need to understand the social and behavioral drivers of employee actions. What’s important to them? What motivates them? What are they concerned about? You can then create messaging to address employees’ pain points or motivators—to give them some reason to act, or not act, based on what they hear and learn. ... Security is a journey and a conversation, not a destination and a directive. Thinking like a marketer and taking steps to segment, understand and effectively connect with employees based on their needs, interests and concerns can help to better engage the organization in its cybersecurity efforts.


Young people in tech unhappy despite inclusion push

Almost half of younger people in the tech sector have at some point felt uncomfortable at work because of their gender, ethnicity, background or neurodivergence. Young people not already in the sector said they’re not confident about how to make tech their career, with a number of misconceptions about what a tech career involves still acting as a deterrent. Almost 15% of the young people asked who were not already in the sector said they know nothing about tech careers, with 29% believing they don’t have the right qualifications for a job in the sector. Women have more doubts about the sector than men – 23% of women believe their maths and science skills aren’t up to scratch for a tech job, compared with 13% of men; and 19% of women doubt they’re smart enough for the sector, compared with 13% of men. ... Only 5% of young people said that a lack of ethnic diversity is a deterrent to pursuing a tech career, although this varies based on the ethnicity of the person asked, with the breakdown being: 9% of young people from mixed-race backgrounds, 10% of people from an Asian background, and almost 36% of people from a black background.


4 Reasons Why Talent Development Is So Important To Your Business

In the age of employee turnover and the Great Resignation, organizations in nearly every field are finding it more difficult than ever to attract and retain top talent. As a leader, you need to make talent development a personal priority to stay competitive in recruiting and keeping the best people. Have a solid plan and communicate it widely to both prospective recruits and current employees. A truly thoughtful talent development program lets people know how much you value them. It strengthens talent in new directions. Employees want to know that their leader sees their potential, and it’s important to be intentional about recognizing and reinforcing the strengths of your people. A one-size-fits-all approach to talent development isn’t good enough—you need to design a program for each individual based on their strengths, their goals and the organization’s needs. When you strengthen your talent, you strengthen your leadership. It improves productivity. According to a recent Gallup study, helping your employees make full use of their skills and strengths, and providing them with opportunities for growth and improvement, can make them up to six times more productive.


8 ways to get out of a career rut

Consider the millennial who felt stuck at a small company with no room for growth. Or the older generation of workers who thought they should retire early because the future was so uncertain and accepting a complete shift to digital felt daunting. For Gen Z, the prospect of never meeting managers or colleagues – because of virtual interviews and remote jobs – was foreign and left some without a sense of belonging. Not only were we physically absent from workspaces, but many of us also struggled mentally with the sudden, enormous changes to our daily routines and goals. It became a time of contemplation, where many professionals began reassessing their careers (and lives). And the realization for many? They felt stuck. What are your options if you want to take a big leap out of your current situation? How do you find motivation, especially after a couple of very stressful years outside of your control? What inspires you to take on a new challenge?


The Dark Side of Open Source

Lack of interest, patience, and time, a change of profession, and creative differences are some of the issues that push developers to close an open-source project. But the biggest reason developers quit is that they run out of energy. People like John Resig, creator of jQuery, and Ryan Dahl, creator of Node.js, most likely exited their respective OSS projects because they couldn’t keep up with the energy they demanded. Faker.js creator Marak Squires’ sentiment was understandable. It’s very difficult to do unpaid work for a long period of time, and at a certain point an open-source project can become more hassle than it’s worth. It also depends, of course, on your motivations for developing open-source software, but more on that later. The best open-source projects are typically those maintained by developers who are compensated for their work on them, can maintain a work-life balance, and can devote their full attention to enhancing them.


Back to Basics: Cybersecurity's Weakest Link

Social engineering was a driver for hacking over 20 years ago and, apparently, we still haven't moved away from it. Adding insult to injury, successful social engineering isn't restricted to non-technical organizations. It's very plausible that an unsavvy user in a backwater government department might fall for social engineering, for example, but much less so someone working at a leading tech firm – and we see that both Uber and Rockstar Games were impacted by social engineering. At some point, as a cybersecurity practitioner with the responsibility of educating your users and making them aware of the risks that they (and by extension the organization) are exposed to, you'd think that your colleagues would stop falling for what is literally the oldest trick in the hacking playbook. It's conceivable that users are not paying attention during training or are simply too busy with other things to remember what someone told them about what they can click on or not. However, social engineering attacks have so consistently been in the public news – not just cybersecurity news – that the excuse "I didn't know I shouldn't click email links" is getting harder and harder to accept.


Cyber insurance explained: What it covers and why prices continue to rise

For technology and compliance lawyer Jonathan Armstrong, the most significant driver of change in cyber insurance is demand for financial protection from litigation against organizations in the wake of cyber incidents. “We have seen that an attack or breach can be followed in the next day or so by lawyers claiming that they are investigating litigation against the company that has been hit.” This issue has been under the spotlight recently in the Lloyd v Google case in the UK. Richard Lloyd alleged that Google collected data from around 4 million iPhone users between 2011 and 2012 regarding their browsing habits without their knowledge or consent for commercial purposes, such as targeted advertising. He sought to bring a representative action on behalf of all affected individuals against Google for compensation, which Google opposed. The UK Supreme Court sought to establish whether such a claim for a breach of data protection legislation can succeed without distinctive personal damage and whether claimants can bring group action on behalf of unidentified individuals, including people who may not even be aware that they were affected.


Achieving faster time-to-market with data management

When companies manage their product data efficiently, they can be flexible when launching new products. With error-free product data for new items, brands can customise information for each marketplace and promotion period. A PIM (product information management) system holds high-quality product information that is scalable, and offers complete freedom to be deployed across any technology environment. Product data can be easily imported from various vendors in multiple file formats and mapped to a single point of truth. ... In the wake of technological advances, fluctuating consumer expectations, competitive pressures, and turbulent market dynamics, operational agility is vital to survive and succeed. Faster time-to-market is one of the parameters that determines business agility. To continuously deliver high-quality, novel, and faster services, companies need to deploy PIM, which enhances product information and improves conversion rates and customer retention. Businesses can also make data-driven decisions and create joyous customer journeys with the available data.
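The "single point of truth" idea above can be sketched in a few lines: vendor feeds arrive in different formats with different field names, and each is mapped onto one canonical product schema. This is a minimal illustration, not any specific PIM product's API; all field names here are made up.

```python
import json

# Canonical product schema acting as the single point of truth
# (field names are illustrative, not from a real PIM system).
def from_vendor_csv_row(row: dict) -> dict:
    """Map one vendor's CSV column names onto the canonical schema."""
    return {"sku": row["item_no"], "name": row["description"],
            "price_eur": float(row["unit_price"])}

def from_vendor_json(payload: str) -> dict:
    """Map another vendor's JSON feed onto the same schema."""
    data = json.loads(payload)
    return {"sku": data["id"], "name": data["title"],
            "price_eur": float(data["price"])}

catalog = {}
for record in (
    from_vendor_csv_row({"item_no": "A-100", "description": "Lager 6-pack",
                         "unit_price": "7.99"}),
    from_vendor_json('{"id": "A-100", "title": "Lager 6-pack", "price": 7.99}'),
):
    catalog[record["sku"]] = record  # deduplicate per SKU

print(catalog["A-100"]["price_eur"])  # 7.99
```

Once every feed passes through such a mapping layer, downstream channels only ever see the canonical shape, which is what makes per-marketplace customisation tractable.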



Quote for the day:

"If you don't demonstrate leadership character, your skills and your results will be discounted, if not dismissed." -- Mark Miller

Daily Tech Digest - October 05, 2022

How edge computing will support the metaverse

Edge computing supports the metaverse by minimizing network latency, reducing bandwidth demands and storing significant data locally. Edge computing, in this context, means compute and storage power placed closer to a metaverse participant, rather than in a conventional cloud data center. Latency increases with distance—at least for current computing and networking technologies. Quantum physics experiments can convey information at a distance without significant delay, but those aren’t systems we can scale or use for standard purposes—yet. In a virtual world, you experience latency as lag: A character might appear to hesitate a bit as it moves. Inconsistent latency produces movement that might appear jerky or communication that varies in speed. Lower latency, in general, means smoother movement. Edge computing can also help reduce bandwidth, since calculations get handled by either an on-site system or one nearby, rather than a remote location. Much as a graphics card works in tandem with a CPU to handle calculations and render images with less stress on the CPU, an edge computing architecture moves calculations closer to the metaverse participant. 
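The latency-distance relationship above has a simple physical floor: light in optical fibre travels at roughly 200,000 km/s (about two-thirds of c), so distance alone bounds round-trip time before any processing or queueing is added. A back-of-the-envelope sketch, with illustrative distances:

```python
# Light in fibre covers roughly 200 km per millisecond, so physical
# distance sets a hard lower bound on round-trip time (RTT).
FIBRE_KM_PER_MS = 200.0

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round trip: out and back at fibre propagation speed."""
    return 2 * distance_km / FIBRE_KM_PER_MS

print(min_rtt_ms(9000))  # distant cloud region: 90 ms floor
print(min_rtt_ms(50))    # nearby edge site: 0.5 ms floor
```

Real latency is higher once routing, queueing and processing are included, but the gap between the two numbers is why moving compute to the edge smooths out lag in a virtual world.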


Big Gains In Tech Slowed By Talent Gaps And High Costs, Executive Survey Finds

Survey participant, Andrew Whytock, head of digitalization in Siemens pharmaceutical division, crystallized the criticality of employee recruitment, training and retention, explaining, “It’s great having a big tech strategy, but employers are struggling to find the people to execute their plans.” In addition to growth needs, staffing problems extend to fortifying cybersecurity. Nearly 60% of respondents reported that cybersecurity objectives are behind schedule. When asked to identify the “internal challenges” driving delays, executives ranked “lack of key skills” and “cultural obstacles” highest. That’s inexcusable. Lax tech controls and strategy acceleration pressure make a dangerous mix. To thrive, “digitally mature” enterprises need top talent in supportive cultures to unlock the transformative value of their sizable IT modernization investments. ... Despite huge investments in job training and leadership development, broad business perspective remains a widespread skill gap.


How to design a data architecture for business success

“Data architecture is many things to many people and it is easy to drown in an ocean of ideas, processes and initiatives,” says Tim Garrood, a data architecture expert at PA Consulting. Firms need to ensure that data architecture projects deliver value to the business, he adds, and this needs knowledge and skills, as well as technology. However, part of the challenge for CIOs and CDOs is that technology is driving complexity in both data management and how it is used. As management consultancy McKinsey put it in a 2020 paper: “Technical additions – from data lakes to customer analytics platforms to stream processing – have increased the complexity of data architectures enormously.” This is making it harder for firms to manage their existing data and to deliver new capabilities. The move away from traditional relational database systems to much more flexible data structures – and the ability to capture and process unstructured data – gives organisations the potential to do far more with data than ever before. The challenge for CIOs and CDOs is to tie that opportunity back to the needs of the business.


What Is Cloud Orchestration?

Cloud orchestration is the coordination and automation of workloads, resources, and infrastructure across public and private cloud environments, so that each part works together to produce an efficient system. Cloud automation is a subset of cloud orchestration focused on automating the individual components of a cloud system. Cloud orchestration and automation complement each other to produce an automated cloud system. ... Cloud orchestration supports the DevOps framework by allowing continuous integration, monitoring, and testing. Cloud orchestration solutions manage all services so that you get more frequent updates and can troubleshoot faster. Your applications are also more secure, as you can patch vulnerabilities quickly. The journey towards full cloud orchestration is hard to complete. To make the transition more manageable, you can find benefits along the way with cloud automation. For example, you might automate the database component to speed up manual data handling or install a smart scheduler for your Kubernetes workloads.
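The automation-versus-orchestration distinction can be sketched in a toy form: each function below is an automated task, and the orchestrator adds what automation alone lacks, namely ordering and failure handling across tasks. The step names are hypothetical and stand in for real provisioning tooling.

```python
# Each function automates one component (hypothetical stand-ins for
# real provisioning tools); the orchestrator coordinates them.
def provision_vm(state):
    state["vm"] = "running"

def configure_network(state):
    state["network"] = "attached"

def deploy_app(state):
    state["app"] = "deployed"

def orchestrate(steps, state):
    """Run automated steps in order; record and stop at the first failure."""
    for step in steps:
        try:
            step(state)
        except Exception:
            state["failed_at"] = step.__name__
            break
    return state

state = orchestrate([provision_vm, configure_network, deploy_app], {})
print(state["app"])  # deployed
```

Automating any one of these steps on its own is already useful, which is the "benefits along the way" point: the orchestration layer can arrive later and simply sequence what is already automated.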


Introducing post-quantum Cloudflare Tunnel

From tech giants to small businesses: we will all have to make sure our hardware and software is updated so that our data is protected against the arrival of quantum computers. It seems far away, but it’s not a problem for later: any encrypted data captured today can be broken by a sufficiently powerful quantum computer in the future. ... How does it work? cloudflared creates long-running connections to two nearby Cloudflare data centers, for instance San Francisco and one other. When your employee visits your domain, they connect to a Cloudflare server close to them, say in Frankfurt. That server knows that this is a Cloudflare Tunnel and that your cloudflared has a connection to a server in San Francisco, and thus it relays the request to it. In turn, via the reverse connection, the request ends up at cloudflared, which passes it to the webapp via your internal network. In essence, Cloudflare Tunnel is a simple but convenient tool, but the magic is in what you can do on top of it: you get Cloudflare’s DDoS protection for free, fine-grained access control with Cloudflare Access, and request logs, to name a few.
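The relay flow described above can be sketched conceptually: an edge server looks up which data centers hold a long-lived cloudflared connection for a hostname and forwards the request over that reverse connection. This is a simplified mental model only; no real Cloudflare API or wire format is used, and all names are illustrative.

```python
# Hypothetical registry: hostname -> data centers holding an open
# reverse connection from cloudflared (simplified illustration).
TUNNEL_REGISTRY = {
    "app.example.com": ["SFO", "LAX"],
}

def route_request(edge_location: str, hostname: str) -> str:
    """Relay a request from the edge toward the origin via a tunnel."""
    tunnels = TUNNEL_REGISTRY.get(hostname)
    if not tunnels:
        return "no tunnel registered"
    # The edge relays to a data center with an open tunnel; the origin
    # is reached over the connection cloudflared already established,
    # so no inbound port on the origin network is ever opened.
    return f"{edge_location} -> {tunnels[0]} -> cloudflared -> origin"

print(route_request("FRA", "app.example.com"))
```

The key property the sketch captures is directionality: all connections are dialed outward by cloudflared, which is why the origin can sit behind a firewall with no exposed ports.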


What are the benefits of a microservices architecture?

The benefit of a microservice architecture is that developers can deploy features that prevent cascading failures. A variety of tools are also available, from GitLab and others, to build fault-tolerant microservices that help improve the resilience of the infrastructure. A microservice application can be programmed in any language, so dev teams can choose the best language for the job. The fact that microservices architectures are language agnostic also allows developers to use their existing skill sets to maximum advantage – no need to learn a new programming language just to get the work done. Using cloud-based microservices gives developers another advantage, as they can access an application from any internet-connected device, regardless of its platform. A microservices architecture lets teams deploy independent applications without affecting other services in the architecture. This feature enables developers to add new modules without redesigning the system's complete structure. Businesses can efficiently add new features as needed under a microservices architecture.
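One common pattern for preventing the cascading failures mentioned above is a circuit breaker: after repeated errors from a downstream service, callers fail fast instead of piling up requests against it. A minimal sketch (not tied to any particular framework):

```python
import time

class CircuitBreaker:
    """After max_failures consecutive errors, reject calls for
    reset_after seconds instead of letting failures cascade."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the counter
        return result

breaker = CircuitBreaker(max_failures=2)

def flaky():
    raise ConnectionError("downstream service timed out")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

try:
    breaker.call(flaky)
except RuntimeError as exc:
    print(exc)  # circuit open: failing fast
```

Because each service wraps its outbound calls this way, one slow dependency degrades gracefully rather than dragging down every service that transitively depends on it.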


Tips for effective data preparation

According to TechRepublic, data preparation is “the process of cleaning, transforming and restructuring data so that users can use it for analysis, business intelligence and visualization.” AWS’s definition is even simpler: “Data preparation is the process of preparing raw data so that it is suitable for further processing and analysis.” But what does this actually mean in practice? Data doesn’t typically reach enterprises in a standardized format and, thus, needs to be prepared for enterprise use. Some of the data is structured — like customer names, addresses and product preferences — while most is almost certainly unstructured — like geospatial data, product reviews, mobile activity and tweets. Before data scientists can run machine learning models to tease out insights, they first need to transform the data, reformatting it or perhaps correcting it, so it’s in a consistent format that serves their needs. ... In addition, data preparation can help to reduce data management costs that balloon when you try to apply bad data to otherwise good ML models. Now, given the importance of getting data preparation right, what are some tips for doing it well?
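The "consistent format" step can be made concrete with a small sketch: raw rows arrive with inconsistent casing, stray whitespace and two different date formats, and preparation normalizes them into one layout before any analysis. Field names and formats here are invented for illustration.

```python
import csv
import io

# Raw feed with inconsistent casing, whitespace and date formats
# (all values are made up for illustration).
raw = io.StringIO(
    "name,city,signup\n"
    " Alice ,NEW YORK,2022/10/03\n"
    "bob,new york,03-10-2022\n"
)

def normalize_date(value: str) -> str:
    """Coerce the two observed formats into ISO 8601 (YYYY-MM-DD)."""
    if "/" in value:                     # YYYY/MM/DD
        y, m, d = value.split("/")
    else:                                # DD-MM-YYYY
        d, m, y = value.split("-")
    return f"{y}-{m}-{d}"

prepared = [
    {
        "name": row["name"].strip().title(),
        "city": row["city"].strip().title(),
        "signup": normalize_date(row["signup"].strip()),
    }
    for row in csv.DictReader(raw)
]

print(prepared[0])  # {'name': 'Alice', 'city': 'New York', 'signup': '2022-10-03'}
```

After this pass, both rows share one schema and one date convention, which is exactly the property a downstream ML pipeline or BI tool depends on.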


Optimizing Isolation Levels for Scaling Distributed Databases

The SnapshotRead isolation level, although not an ANSI standard, has been gaining popularity. This is also known as MVCC. The advantage of this isolation level is that it is contention-free: it creates a snapshot at the beginning of the transaction. All reads are sent to that snapshot without obtaining any locks. But writes follow the rules of strict Serializability. A SnapshotRead transaction is most valuable for a read-only workload because you can see a consistent database snapshot. This avoids surprises while loading different pieces of data that depend on each other transactionally. You can also use the snapshot feature to read multiple tables at a particular time and then later observe the changes that have occurred since that snapshot. This functionality is convenient for Change Data Capture tools that want to stream changes to an analytics database. For transactions that perform writes, the snapshot feature is not that useful. You mainly want to control whether to allow a value to change after the last read. If you want to allow the value to change, it will be stale as soon as you read it because someone else can update it later.
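The contention-free property described above can be sketched in miniature: each committed write gets a version timestamp, and a snapshot transaction only sees versions committed at or before the moment it started, with no read locks at all. This is a toy model of the idea, not any particular database's implementation.

```python
class VersionedStore:
    """Toy MVCC store: every write appends a (timestamp, value) version."""

    def __init__(self):
        self.commit_ts = 0
        self.versions = {}  # key -> list of (commit_ts, value)

    def write(self, key, value):
        self.commit_ts += 1
        self.versions.setdefault(key, []).append((self.commit_ts, value))

    def snapshot(self):
        return Snapshot(self, self.commit_ts)

class Snapshot:
    """Read-only view pinned to the store's state at creation time."""

    def __init__(self, store, ts):
        self.store, self.ts = store, ts

    def read(self, key):
        # Latest version committed at or before the snapshot timestamp;
        # no locks are taken, so concurrent writers are never blocked.
        visible = [v for ts, v in self.store.versions.get(key, []) if ts <= self.ts]
        return visible[-1] if visible else None

db = VersionedStore()
db.write("balance", 100)
snap = db.snapshot()
db.write("balance", 50)      # a writer commits after the snapshot
print(snap.read("balance"))  # 100: the snapshot is unaffected
```

This also shows why snapshots suit Change Data Capture: comparing `snap.read` against a later snapshot's reads yields exactly the changes committed in between.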


Why IT leaders should embrace a data-driven culture

Data tells the story of what works – and perhaps more importantly, what doesn’t work – for your team. It provides a clear and unbiased picture of how new transformations are netting out and where opportunities lie to increase efficiency and value. Utilizing the right metrics reveals which innovations are most effective for the team, letting IT managers know how transformations are running. Focusing on these results helps organizations streamline business processes and leads to higher team productivity. It also puts IT leaders on the path to sunset legacy solutions that require large budgets or lots of manual work to keep them functional. These changes impact all business areas, allowing employees anywhere and everywhere – not just those in IT – to be more innovative and effective. ... As business leaders focus on meeting the needs of today’s evolving workforce and customers’ desires, operating with a data-driven strategy lets managers stay agile and confident in their next steps. Allowing data to drive decisions also provides a means to back those decisions with clear evidence.


Who is responsible for cyber security in the enterprise?

Alarmingly — or perhaps unfairly — only 8 per cent of executives said that their CISO or equivalent performs above average in communicating the financial, workforce, reputational or personal consequences of cyber threats. At the same time, under 15 per cent of executives gave their CISOs or equivalent a top rating on a scale of one to ten. Maintaining a bridge between business and tech is vital when it comes to ensuring all are on the same page regarding security. “It is no surprise that one of the main challenges companies face when implementing a cyber risk mitigation or resiliency plan is the communication gap between the board and the CISO,” said Anthony Dagostino, founder and CEO of cyber insurance and risk management provider Converge. “Cyber resiliency starts with the board because they understand risk and can help their organisations set the appropriate strategy to effectively mitigate that risk. However, while CISOs are security specialists, most of them still struggle with adequately translating security threats into operational and financial impact to their organisations – which is what boards want to understand.”



Quote for the day:

"You may be good. You may even be better than everyone else. But without a coach you will never be as good as you could be." -- Andy Stanley

Daily Tech Digest - October 04, 2022

One aspect is to implement change management on the automation, including the scripts, config files, and playbooks used to manage the network. The use of code management tools helps with this: check-out and check-in events help staff remember to follow other parts of proper process. Applying change management at this level means describing the intended modifications to the automation, testing them, planning deployment, having a fallback plan to the previous known-good code where that is applicable, and determining specific criteria by which to judge whether the change succeeded or needs to be rolled back. ... Putting in place automation to lock in a network state is a change management event, and in a sense, a change to the architecture; creating it and putting it into production needs to go through the whole approval and deployment process, and all future changes need to be made with its presence and operation in mind—considering it has to be part of future change management evaluations.
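The elements listed above (intended modification, fallback target, explicit success criteria) can be captured in a small change record that decides mechanically whether to keep a change or roll it back. All field names and thresholds below are illustrative assumptions, not any specific tool's schema.

```python
from dataclasses import dataclass, field

@dataclass
class AutomationChange:
    """One change to network automation, with its rollback decision data
    (hypothetical structure for illustration)."""
    description: str
    previous_good_version: str          # fallback target if criteria fail
    success_criteria: list = field(default_factory=list)  # predicates

    def evaluate(self, observed: dict) -> str:
        """Judge the change against observed post-deployment metrics."""
        if all(check(observed) for check in self.success_criteria):
            return "keep"
        return f"roll back to {self.previous_good_version}"

change = AutomationChange(
    description="tighten timers in the BGP playbook",
    previous_good_version="v1.4.2",
    success_criteria=[
        lambda o: o["config_drift"] == 0,
        lambda o: o["failed_devices"] == 0,
    ],
)

print(change.evaluate({"config_drift": 0, "failed_devices": 2}))
# roll back to v1.4.2
```

Writing the criteria down as predicates before deployment is the point: the keep-or-rollback decision is then made against agreed measurements rather than argued after the fact.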


The Impact of Cybersecurity on Consumer Behavior

In addition to imperiling consumers’ PII, cyberattacks also cause consumers to feel helpless about their ability to protect their own data. According to ISACA’s survey report, about one in five consumers in the US, UK and Australia experience a sense of resignation that there is nothing they can do to protect themselves from cybercrimes. Nearly half of consumers in the US, UK and Australia think that they are likely to be a victim of cybercrimes. Although the initial cyberattack occurs just once, the lasting impacts of that attack continue for an unknown amount of time. If consumers’ data are stolen during cybercrimes and are subsequently sold to malicious actors, one attack can turn into a headache of fraud, identity theft and social engineering scams for the foreseeable future. Cyberattacks that compromise personal medical information in the healthcare industry or important account details in the financial services industry can cause emotional and financial stress. In the United States, the public is beginning to worry about state-sponsored cyberattacks against national security and defense systems and government agencies, in addition to their own personal information.


Digital transformation is brewing at Heineken

Heineken says it is fully committed to the path to net zero – and that there are efforts around the organisation to achieve this goal. Sustainability is top of mind in the strategies and tactics for digital transformation. “We have several fully green breweries,” said Osta. “This started in Austria a few years back with Goesser and is now being replicated in markets including France and Brazil. We also have 3D printers in 40 breweries, with 25 more in plan for this year. 3D printing on-site is very effective when it comes to spare parts management as it reduces carbon emissions. “There is also an incredible effort being made on the data side in terms of what we can estimate and measure. We are always looking at emerging data standards for better quality data to exchange across the ecosystem with our suppliers. The challenge is that often in sustainability we are faced with dark data – data that is critical but not collected or visible. “The corporate value chain (Scope 3) reporting requires an ecosystem approach of data exchange. 


Digital Identity Bill Passes Key Senate Milestone

The legislation stops short of mandating national IDs. It would create a task force to create standards and recommend a voluntary program for states, local, tribal and territorial governments to verify identities online for "high-value transactions." About a half-dozen states have already rolled out mobile driver's licenses in the pilot phase. Nationwide standards would help ensure these new IDs are secure and provide a guide for other states. Grant says online verification could be offered in a variety of forms, such as on-demand validation services, which could become part of the credit card application process, or a mobile app on smartphones that people could carry in their pockets. "Identity is very personal," Grant says. "You're probably going to need to create a few different channels for Americans to be able to tap into these authoritative sources. I'd be thrilled to have a mobile driver's license app on my phone. Others would say, 'I don't want to have an app from the government on my phone.'"


Do You Fit Cybercriminals’ Ideal Victim Profile?

The Bad Actors Know About You. And they know exactly why you keep putting off addressing your cybersecurity vulnerabilities. Don’t give attackers any more advantages when it comes to breaching your law practice. My advice? Be more reticent when it comes to sharing personal information on social media. (For example, if you work from home, register as an online business when you set up your Google Business profile so that your physical address and photos of your home won’t show up on Google Maps.) Be less trusting of seemingly friendly messages and emails that cross your transom. While technology solutions can greatly improve your defenses, humans are the last line of defense. Don’t click on attachments from unknown senders. If a large file arrives from someone you haven’t heard from in a long time, call them to say hello and ask about the email before you click. Be more vigilant in general — including asking qualified cybersecurity professionals to assess your current level of protection and recommend safeguards. Rereading this, even I got stressed.


Five Data-Loading Patterns To Boost Web Performance

No one likes a blank white screen, especially your users. Lagging resource-loading waterfalls need a basic placeholder before you can start building the layout on the client side. Usually, you would use either a spinner or a skeleton loader. As the data loads piece by piece, the page will show a loader until all the components are ready. While adding loaders as placeholders is an improvement, leaving them on too long can cause a “spinner hell.” Essentially, your app is stuck on loading, and while that is better than a blank HTML page, it can get annoying, and visitors will choose to exit your site. ... Modern JavaScript frameworks often use client-side rendering (CSR) to render webpages. The browser receives a JavaScript bundle and static HTML in a payload, then it will render the DOM and add the listeners and event triggers for reactiveness. When a CSR app is rendered inside the DOM, the page will be blocked until all components are rendered successfully. Rendering makes the app reactive. To run it, you have to make another API call to the server and retrieve any data you want to load.


What Will it Take to End the Public Sector’s Cybersecurity Talent Gap?

The public sector can be deliberately hard to understand. From the multiple terms and acronyms used to describe programs and agencies, to an incredibly complex technological infrastructure, beginning a career in government can seem daunting. That is compounded when realizing even entry-level roles often require at least five years of experience. Many cybersecurity job descriptions highlight requirements for certifications and achievements, which can only be earned after a certain amount of time in the field. Instead of having such high expectations for entry-level candidates, which will only continue to leave hundreds of jobs unfilled, government agencies need to update their job descriptions to be truly entry-level and seek out college graduates or individuals who might have just completed a cybersecurity bootcamp or training program—and who have yet to gain any experience. It would also be beneficial to look at talent that might not come from a STEM field. Candidates with backgrounds in history or English can bring skills like analytical thinking and communication to the table—skills that are often a lot harder to teach than computer science.


8 strange ways employees can (accidentally) expose data

Video conferencing platforms such as Zoom and Microsoft Teams have become a staple of remote/hybrid working. However, new academic research has found that bespectacled video conferencing participants may be at risk of accidentally exposing information via the reflection of their eyeglasses. ... Users may not associate posting pictures on their personal social media and messaging apps as posing a risk to sensitive corporate information, but as Dmitry Bestuzhev, most distinguished threat researcher at BlackBerry, tells CSO, accidental data disclosure via social apps such as Instagram, Facebook, and WhatsApp is a very real threat. “People like taking photos but sometimes they forget about their surroundings. So, it’s common to find sensitive documents on the table, diagrams on the wall, passwords on sticky notes, authentication keys and unlocked screens with applications open on the desktop. All that information is confidential and could be put to use for nefarious activities.”


Used servers: Bargain or too good to be true?

Used equipment can run as well as new equipment “when you find the right seller,” says Peter Strahan, founder and CEO of Lantech, a managed IT services provider. “This allows you to rapidly cut the costs of a data center with used equipment.” In addition, deploying used IT equipment is generally good for the environment, Strahan says. “While the equipment could theoretically be recycled, it takes a lot of manpower,” he says. “Finding a use for it after it becomes obsolete saves a lot of time and money when it comes to recycling and stops the equipment going to the landfill.” A lot of companies “value the ‘green’ benefits of redeploying hardware,” says Cameron James, executive vice president of CentricsIT, a global IT services provider. “The best way to reduce IT waste is to use any product to its maximum lifespan, without compromising on performance. This is easy to do. Many used products are N-1—just one generation back from the latest OEM lines.” It can also make sense to buy used equipment if an organization has moderate powering needs in its data center, Strahan says. “If you have large powering needs, you will need the most efficient equipment,” he says.


Carbon copies: How to stop data retention from killing the planet

So what can be done about it? It is a question that has been plaguing the IT industry for years, and the lack of a definitive answer often makes it easier to just turn on another air-conditioning unit and look the other way. But that’s causing even more harm. So what are the alternatives? Storing less data appears to be an obvious answer, but it would be almost impossible to implement, because who decides what parameters are worth recording and what are not? The BBC learned this the hard way when it trashed much of its TV archive during the 1970s and 1980s, assuming it would be of no use. Then came the VCR, the DVD player and, of course, streaming. Ask any Doctor Who fan and they will grimace at the number of early episodes of the long-running sci-fi series that have been lost, perhaps forever, because of a lack of foresight. So, that’s the case to justify digital hoarding. But it all has to be stored somewhere, and those facilities have to be environmentally controlled.



Quote for the day:

"Leadership cannot really be taught. It can only be learned." -- Harold S. Geneen

Daily Tech Digest - October 03, 2022

Roadmap to RPA Implementation: Thinking Long Term

Ted Kummert, executive vice president of products and engineering at UiPath, says RPA should be viewed as a long-range capability meant to empower organizations to evolve strategically and increase business value. It is a journey that can start small, within one division or one department, and grow organically across the business as additional ideas form and the organization’s vision for automation’s potential comes to fruition. He says RPA can clear backlog, create new capacity, free up resources, and improve data quality by integrating software robots into workflows. “It is a truly transformative technology that can reduce or eliminate manual tasks and elevate creative, high-value work,” Kummert says. “Digital transformation is often talked about, but many times can fall short of its goals. Automation is the driver to achieve true digital transformation.” Adam Glaser, senior vice president of engineering for Appian, says many businesses use one automation technology, adding third-party capabilities in patchwork fashion to automate complex end-to-end processes.


How to start and grow a cybersecurity consultancy

To be successful, an entrepreneur must be resilient. Any comment that runs along the lines of “That’s not possible,” or “That can’t be done” should be treated as a challenge to prove the speaker wrong. An entrepreneur needs to have the ability to see through what’s not important. Entrepreneurs don’t just need money – they also need support in the form of encouragement and advice. I would advise budding entrepreneurs to attend meetups within their industry or local community and seek out online support via forums and groups. You’ll be surprised just how willing others will be to help and offer advice for free. Asking questions and getting reassurance and sanity checks from peers can be invaluable at all stages of your business’s journey. There will always be someone a little further down the path you’re taking. Starting a business can be exhilarating, rewarding and fun, but it can be exhausting, relentless and stressful in equal measure.


Surveillance tech firms complicit in MENA human rights abuses

“When operating in conflict-affected or high-risk regions as the MENA region, the surveillance sector must undertake heightened human rights due diligence and, if it cannot do so or it identifies evidence of harm, it should stop selling its technology to companies or governments,” said Dima Samaro, MENA regional researcher and representative at the Business & Human Rights Resource Centre. “Lack of adequate due diligence measures by private companies will only worsen the situation for those from marginalised communities, putting their lives in jeopardy as the absence of robust regulation and effective mechanisms in the region allows surveillance technologies to be operated freely and without scrutiny.” The report added that, although the United Nations’ (UN) Guiding principles on business and human rights were adopted a decade ago – which establish that companies must take proactive and ongoing steps to identify and respond to the potential or actual human rights impacts of their business – the principles’ non-binding, voluntary nature means there are “glaring gaps in human rights safeguards” at the firms.


How companies can accelerate transformation

Ensuring that customer value drives technology architecture and investment is one way to optimize technology usage. Another way is to ensure that an organization is getting the most out of the investments it has already made. Inefficiency in any aspect of technology usage represents a drag on businesses’ ability to change quickly. ... While enterprise architects (EAs) play a central role in identifying opportunities for this type of technology optimization, they have an even greater role to play when it comes to optimizing the entire IT landscape. A “business capability” perspective makes this possible. ... Efficiency doesn’t improve on its own. The business needs to decide to improve it. Making those decisions, however, is not always easy. As mentioned, relying on business capabilities to evaluate technology needs is one way to simplify the decision process. The other is visibility. Business leaders can’t make decisions if they can’t see the problem. In terms of business architecture, EAs help guide leaders in the decisions they make by showing them business capability maps, data-rich process diagrams and dashboards highlighting the connection between architectural issues and business value.


Optus reveals extent of data breach, but stays mum on how it happened

Optus says its recent data breach impacted 1.2 million customers with at least one form of identification number that is valid and current. The Australian mobile operator also has brought in Deloitte to lead an investigation on the cybersecurity incident, including how it occurred and how it could have been prevented. Optus said in a statement Monday that Deloitte's "independent external review" of the breach would encompass the telco's security systems, controls, and processes. It added that the move was supported by the board of its parent company Singtel, which had been "closely monitoring" the situation. Elaborating on Deloitte's forensic assessment, Optus CEO Kelly Bayer Rosmarin said: "This review will help ensure we understand how it occurred and how we can prevent it from occurring again. It will help inform the response to the incident for Optus. This may also help others in the private and public sector where sensitive data is held and risk of cyberattack exists." In its statement, Optus added that it had worked with more than 20 government agencies to determine the extent of the data breach.


Why cyber security strategy must be more than a regulatory tick-box exercise

While technology plays a critical role in an effective cyber security strategy, it alone does not provide the solution. Business leaders must also consider the organisation’s processes and people. If organisations don’t have the right processes or people in place to manage new technologies, it can be easy to revert to old habits. Many organisations opt for a hybrid Security Operations Centre to underpin their MDR strategy, which combines the cyber skills of in-house engineers, cyber security teams and an MSSP to create a single facility. MSSPs fill in the gaps in defences while upskilling in-house teams to stay on top of changing threats and technologies. This approach can also free in-house staff to drive projects and internal improvements while the MSSP takes the lead on high value incidents. If the goal is to improve cyber security whilst meeting your organisational goals, then regulations will only ever go so far in tackling the issue. Attacks will continue to plague all sectors and proper detection, response and remediation will be what makes the difference between those that make the news and those that don’t.


Mozilla is looking for a scapegoat

Not so long ago, Microsoft’s Internet Explorer dominated browser market share. Antitrust authorities helped change that, but it was Google, not Mozilla, that stepped up to take Microsoft’s place, and without the bully pulpit of a dominant operating system. Meanwhile, as far back as 2008, I was writing about Mozilla’s chance to make Firefox a true community-developed web platform. It didn’t succeed, though Mozilla has gifted us incredible innovations such as Rust. Clearly there are smart people at Mozilla, and they have demonstrated the ability to push the envelope on innovation. But not with Firefox. DuckDuckGo has carved out a growing, sizeable niche in privacy-oriented search, but Mozilla keeps losing similar ground in browsers. Why? In its report, Mozilla says browser freedom has been “suppressed for years through online choice architecture and commercial practices that benefit platforms and are not in the best interest of consumers, developers, or the open web.” This would be more credible coming from Mozilla if it weren’t the same company that completely mismanaged its entrance into the mobile market.


Indonesia Data Protection Law Includes Potential Prison Time

The Indonesia data protection law took some eight years to come to fruition, with contentious ongoing debate about which government body should oversee the new regulations and exactly how strong the penalties should be. A recent wave of cyber attacks and data breaches in the country seems to have prompted legislative action; Kaspersky reports that the country experienced 11.8 million cyberattacks in the first quarter of 2022, a 22% increase from the prior year, and the country has become the leading target for ransomware attacks in Southeast Asia. This includes data breaches of various government agencies, one of which exposed the vaccination records of President Joko Widodo. Statistics from Surfshark indicate that Indonesia now has the third-highest rate of data breaches in the world. Regulatory oversight has fallen to the executive branch, with the President slated to form an oversight body tasked with determining and administering fines. Similar to the EU’s General Data Protection Regulation (GDPR), from which the Indonesia data protection law drew substantially, there is a maximum potential fine of 2% of global annual turnover for violations.


How To Protect Your Reputation After A Hack Or Data Breach

Part of transparency and recovery is working with the relevant authorities and experts to track the scope of the breach. A post-mortem analysis can be critical. For one thing, it can determine what data was stolen, by whom, and how. It can also help track where that data ends up and how it is used. In cases where the cause involves exploited software or hardware, it can be essential to inform the developers or manufacturers of the breach and how it occurred. They may also need to issue patches or recalls to prevent other businesses using that hardware or software from being compromised. No business stands alone. ... Recovery after a breach is a sensitive time. You will undoubtedly see a deluge of negative reviews and bad press, which will be difficult to counteract. Clear and transparent messaging is part of the answer; breaches happen, and there is no surefire way to avoid them. Demonstrating that your data security policies prevented usable data from being stolen, or that you have been able to protect users proactively, can be critical to repairing your reputation.


Data quality is at the heart of successful data governance

The downstream effects of data quality have ramifications felt throughout data governance efforts. Recent findings from a survey by Enterprise Strategy Group showed that data management is greatly challenged by a lack of visibility, compounded by data quality issues. Concerningly, 42 percent of all respondents indicated at least half of their data was “dark data”: retained by the organization, but unused, unmanageable, and unfindable. An influx of dark data and a lack of data visibility often lead to downstream bottlenecks, impeding the accuracy and effectiveness of operational data. Data quality was the top driver for organizations’ data governance programs, but it was also the top challenge these organizations must overcome to maximize the return on their data governance efforts. Given that many organizations are struggling with hard-to-manage data quality issues and, in many cases, hold significant amounts of dark data, there is a clear need for more robust data governance solutions that provide data landscape transparency united with business context and guidance.

Quote for the day:

"Perhaps the ultimate test of a leader is not what you are able to do in the here and now - but instead what continues to grow long after you're gone" -- Tom Rath