Daily Tech Digest - October 04, 2021

4 Misconceptions about DevSecOps Every CIO Should be wary of

True DevSecOps, like DevOps, necessitates a harmonious collaboration of people, processes, and tools. It’s a culture, automation, and platform design approach that emphasizes security as a shared responsibility across the IT lifecycle. DevSecOps is, in fact, a human as well as a technical challenge. Personal development, culture, and connections with teams and managers are all critical factors in forming a successful DevSecOps team.  ... Cloud and cloud-native software and infrastructure are an ideal fit for DevSecOps. It is, nonetheless, useful for a wide range of environments, particularly those that continue to apply a ten-year-old security playbook to their risk profile. Containerized cloud-native environments aren’t the only place where DevSecOps can be used. Some of the technological and process features of DevSecOps, as well as the general shift toward rapid, iterative development cycles, work well with microservices architecture, but not as well with big monoliths’ many dependencies and extensive test cycles. However, most organizations may benefit from DevSecOps’ cultural features, particularly those that have traditionally considered security a pre-deployment checkbox rather than a priority ingrained throughout the organization.


Are You Too Late to Start Your Data Science Journey?

What concerned me the most about being too late was not the amount of material I needed to learn. Rather, I doubted whether I would be able to find a job by the time I had learned enough. Data science was a pretty hot topic and there were quite a number of people already working in this field. In the last three years, I have been not only learning data science but also observing the dynamics of this field. My thoughts about being too late have changed. I was not too late to start back then. Moreover, if I started learning data science today, I would not be too late either. ... The biggest challenge for those who want to make a career change into data science is finding the first job. I faced the same challenge, and it took me about two years to land my first job. This issue is not related to whether you are too late to start learning data science. The jobs are out there and increasing. However, without prior job experience, it is difficult to demonstrate your skills and convince employers or recruiters.


3 fading and 3 future IT culture trends

Whether your IT team is remote, hybrid, or back in the office, all the pivots of 2020 made it clear just how crucial digital transformation is for business. But more than that, it’s important to have the right tech stack – one that’s simple, efficient, and centralized, not scattered or complicated. Adobe Workfront’s State of Work 2021 report indicates that 32 percent of employees have left a job due to inadequate technology that was a barrier to their workflow, and another 49 percent are likely to quit if the tech stack is frustrating or hard to use. IT leaders must scale down their technology in order to consolidate tools and software programs for maximum efficacy. ... While we’re on the subject of a centralized tech stack, let’s talk about the newer trend that has made an imprint on IT culture: the cloud-based workspace. Part of a tech solution called Infrastructure as a Service (IaaS), this digital hub is hosted in the cloud but accessible wherever there’s an internet connection. A cloud-based workspace also eliminates the need for complex hardware or equipment since workers can access it from a wireless device. 


Looking into the future of the metaverse

What will make or break the metaverse will be its ability to capture data from its surroundings and even the biosphere. The only way to do that will be by mass ingestion of the data coming from the Internet of Things. Only with this data will you be able to create a rich and meaningful environment. The next need after “seeing” will be “interacting,” meaning that the data not only needs to be represented in a meaningful way but also must be responsive. On the lowest level, equal to the physiological needs of humans in the real world, you can imagine the needs of a digital infrastructure in the metaverse: tools for ingestion of and access to data and the infrastructure to store, analyse and enrich data. But just like in the real world, before any meaningful interactions can be achieved, security needs to be guaranteed. With all the attention on the exciting possibilities of the metaverse, you could forget what infrastructures will be needed for the heavy lifting. It would have to be optimised for transferring and storing data. To make the metaverse attractive, not only would historical data need to be available, to facilitate context and depth in any interaction, but it would also have to be highly accurate.


5 Practical Steps To Protect Your Business From BYOD Security Risks

In general, personal mobile devices should not be considered the employee’s primary device – they should only be considered a convenience to access chat, email and other cloud apps when using a more secure device is not an option. Note that a VPN is needed when in a public place and an unsecured Wi-Fi network is the only option. Again, it is recommended the employee use their company-provided and managed laptop, not a personal mobile device. Many usage policies actually prohibit employees from connecting to unsecured Wi-Fi in the first place, which solves the problem. ... Another important step to protecting your business against BYOD risks is to create a list of accepted devices for accessing company data. Without a thorough inventory of the BYOD devices in use within an organization’s ecosystem, it’s extremely difficult to effectively measure and mitigate the risk that this poses. Knowing the number of personal devices being used for business tasks allows you to require specific security measures for each type of device.
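
The accepted-device list described above can be enforced with a simple allowlist check at access time. The sketch below is illustrative only: the device records, platform names, and minimum OS-version policy are hypothetical, not taken from any specific MDM product.

```python
# Minimal sketch of a BYOD device allowlist check.
# The policy floors and sample device records are hypothetical.
MIN_OS_VERSION = {"ios": (15, 0), "android": (11, 0)}  # assumed minimums

def parse_version(v):
    """Turn '15.1' into a comparable tuple (15, 1)."""
    return tuple(int(part) for part in v.split("."))

def is_device_accepted(device, allowlist):
    """A device may access company data only if it is registered
    and its OS meets the minimum version for its platform."""
    if device["id"] not in allowlist:
        return False
    floor = MIN_OS_VERSION.get(device["platform"])
    if floor is None:
        return False  # unknown platform: deny by default
    return parse_version(device["os_version"]) >= floor

allowlist = {"dev-001", "dev-002"}
phone = {"id": "dev-001", "platform": "ios", "os_version": "15.1"}
old_phone = {"id": "dev-003", "platform": "android", "os_version": "9.0"}
print(is_device_accepted(phone, allowlist))      # True
print(is_device_accepted(old_phone, allowlist))  # False
```

Keeping the check centralized like this is what makes the inventory actionable: unregistered or out-of-policy devices are denied by default rather than by exception.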


How Can Leaders Prepare for the Unexpected?

With the impacts of an inflection point clear, how do organizations operate in a timely fashion to plan and then respond? Francis said, “I tried to use the past to potentially predict the future. It didn’t work. Given this, I gather all the critical players together routinely. At the same time, I let the pros do their job and I focus on clearing the way of obstructions.” To be able to do this, Young said it is “important to hire good people, empower them, give them the resources they need to operate at the best of their ability, and let them do their jobs. The basics of practicing disaster recovery/business continuity should be built into the organization’s DNA.” CIO Martin Davis claimed, “it is important to think through common scenarios and work out how you would handle them, and ensure you have game plans on the shelf that can be adapted for the unexpected. Ensure you learn from previous experience and have practical advice ready to use and people with the right training.” To do this, Gildersleeve said organizations need clear definitions of who is responsible for what areas in advance of the unexpected.


Learn the Blockchain Basics - Part 9: Blockchain Around the World

From the perspective of a technician, the blockchain is: a transactional platform and distributed accounting ledger using cryptocurrency tokens as a representation of a specific value at the current time (same as fiat). That means that a transaction is carried out by the blockchain nodes, and every member of this blockchain party has a copy of this transaction on their computer (node). Everybody verifies whether the entities that are about to transact have enough funds to make the transaction happen. You are basically announcing to all members of this system that you are about to make something happen and, even though this action is happening between two peers, the rest of the network verifies and records the transaction. It is a computing infrastructure that uses the power of a decentralized database with a linear cell-space structure, published in a semi-public way (also known as “the block”). It’s an open-source software operating on a development platform of the future. The trust service layer, in combination with a Peer-to-Peer (P2P) network, handles microtransactions and large-value transactions alike - allowing two users to do the same things that a bank would need to do on their behalf.
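
The announce-verify-record loop described above can be sketched in a few lines: every node keeps its own copy of the ledger and independently checks the sender's funds before recording anything. This is a toy model under stated assumptions - the `Node` class, the balance arithmetic, and the "genesis" seeding are all illustrative, not a real consensus protocol.

```python
# Sketch: every node keeps its own copy of the ledger and independently
# verifies that the sender has enough funds before recording a transaction.
class Node:
    def __init__(self):
        self.ledger = []  # each node holds a full copy of all transactions

    def balance(self, account):
        credit = sum(tx["amount"] for tx in self.ledger if tx["to"] == account)
        debit = sum(tx["amount"] for tx in self.ledger if tx["from"] == account)
        return credit - debit

    def verify_and_record(self, tx):
        """Accept the transaction only if the sender can cover it."""
        if tx["from"] != "genesis" and self.balance(tx["from"]) < tx["amount"]:
            return False
        self.ledger.append(tx)
        return True

network = [Node() for _ in range(3)]

def broadcast(tx):
    # The transaction is announced to all members; each verifies it itself.
    return all(node.verify_and_record(tx) for node in network)

broadcast({"from": "genesis", "to": "alice", "amount": 10})
print(broadcast({"from": "alice", "to": "bob", "amount": 4}))    # True
print(broadcast({"from": "alice", "to": "bob", "amount": 100}))  # False
```

The point of the sketch is the redundancy: no single node is trusted, because each one re-runs the same funds check against its own copy of the history.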


Donald Knuth on Machine Learning and the Meaning of Life

“The word open source didn’t exist at that time,” Knuth remembers, “but I didn’t want proprietary rights over it, because I saw how proprietary rights were holding things back.” Knuth remembered how IBM had allowed other companies to make their own compilers for IBM’s Fortran programming language — whereas things were different in the typography industry. “Each manufacturer had their own language for composing pages, and that was holding everything back…” But in addition, due to the success of his programming books, “I didn’t need the income! I already had a good job, and people were buying enough books that it would bring me plenty of supplemental income for everything my kids needed for education, whatever,” he said. Referring to a familiar structure in Boolean logic, Knuth quips that income “is sort of a threshold function” — that is, it basically just needs to determine whether a certain minimum has been exceeded. “And so I could specifically see the advantage of making it open for everybody…”


6 data center trends to watch

The struggle to attract and retain staff is an ongoing problem for many data-center owners and operators. Among respondents, 47% report difficulty finding qualified candidates for open jobs, and 32% say their employees are being hired away, often by competitors. In the big picture, Uptime projects that staff requirements will grow globally from about 2 million full-time employee equivalents in 2019 to nearly 2.3 million in 2025. According to Uptime: “New staff will be needed in all job roles and across all geographic regions. In the mature data-center markets of North America and Europe, there is an additional threat of an aging workforce, with many experienced professionals set to retire around the same time—leaving more unfilled jobs, as well as a shortfall of experience. An industry-wide drive to attract more staff, with more diversity, has yet to bring widespread change.” The notion of sustainability is growing in importance in the data-center sector, but most organizations don’t closely track their environmental footprint, Uptime finds. Survey respondents were asked which IT or data-center metrics they compile and report for corporate sustainability purposes. 


Combating vulnerability fatigue with automated security validation

Legacy vulnerability management tools flood security teams with long lists of community-prioritized vulnerabilities – more than 15,000 vulnerabilities were found in 2020 alone. Of these, only 8% were exploited by attackers. Not to mention the top 30 recently reported by CISA. Currently, it’s a cat and mouse game that the customer can never win – chasing an ever-growing list of vulnerabilities without knowing whether they fixed the ones that attackers want to target, exposed the most risk-bearing vulnerabilities, checked if there is an active exploit for a specific vulnerability, or analyzed what the possible risk and impact is that may originate from a vulnerability. All that context is required for security and IT teams to reduce the risk, maintain business continuity, and be a step ahead of the adversary. Unfortunately, the chase for more and more vulnerabilities has kept us away from the goal of where we want and need to be. At this stage of the battle with cyber adversaries, CISOs can’t go backward into the world of vulnerability fatigue.
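
In the spirit of the "only 8% were exploited" statistic, the re-prioritization the excerpt argues for can be sketched as a two-level sort: actively exploited findings first, then severity. The CVE IDs, scores, and the known-exploited set below are fabricated placeholders, not real advisory data.

```python
# Sketch: rank a vulnerability backlog by whether an active exploit exists,
# then by severity, instead of chasing the full community-prioritized list.
# All CVE IDs and scores here are made-up placeholders.
findings = [
    {"cve": "CVE-0000-0001", "cvss": 9.8, "asset": "web-gw"},
    {"cve": "CVE-0000-0002", "cvss": 7.5, "asset": "build-srv"},
    {"cve": "CVE-0000-0003", "cvss": 5.3, "asset": "intranet"},
]
known_exploited = {"CVE-0000-0002"}  # e.g. fed from a KEV-style feed

def prioritize(findings, known_exploited):
    """Actively exploited vulns first; within each group, highest CVSS first."""
    return sorted(
        findings,
        key=lambda f: (f["cve"] not in known_exploited, -f["cvss"]),
    )

for f in prioritize(findings, known_exploited):
    flag = "EXPLOITED" if f["cve"] in known_exploited else "backlog"
    print(f["cve"], flag)
```

Note how the lower-severity CVE-0000-0002 jumps the queue: exploit evidence outranks raw score, which is exactly the context the excerpt says legacy tools fail to provide.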



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them." -- Warren G. Bennis

Daily Tech Digest - October 03, 2021

What Is A Blockchain Wallet & How Does It Work?

A blockchain wallet is digital software that runs on a blockchain and stores private and public keys, as well as monitoring and keeping all the transactions related to those keys on a blockchain. Ideally, a blockchain wallet does not store crypto; rather, all the records relating to these keys are stored on the blockchain on which the wallet is hosted. What this means is that the wallet provides an ID to enable the tracking of all transactions associated with that ID. The blockchain ID is the blockchain wallet address, which is associated with the public key and the private key. Practically speaking, blockchain wallets allow users to store, send, receive, and manage their digital assets on the blockchain.  ... Modern crypto wallets come with integrated APIs to pull data from other platforms. Others can pull data to allow charting and crypto market analysis so that a user can make profitable cryptocurrency trading decisions; social features to allow emailing and chatting with other users online, or posting status updates as well as following and copying their trading practices; and transaction tracking, including reading history and prices for various cryptos.
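
The private key / public key / address relationship the excerpt describes can be sketched as a chain of derivations. This is a simplified model: real wallets derive the public key from the private key with elliptic-curve math (e.g. ECDSA), and address formats vary by chain; here a hash stands in for the curve step so the example stays dependency-free, and the address scheme is purely illustrative.

```python
# Simplified sketch of the key/address relationship a wallet manages.
# NOTE: hashing stands in for real elliptic-curve public-key derivation,
# and the address encoding below is illustrative, not any chain's format.
import hashlib
import secrets

def make_wallet():
    private_key = secrets.token_bytes(32)              # kept secret by the owner
    public_key = hashlib.sha256(private_key).digest()  # stand-in for EC derivation
    # Addresses are typically a shorter hash of the public key,
    # acting as the wallet's shareable ID on the chain:
    address = hashlib.blake2b(public_key, digest_size=20).hexdigest()
    return private_key, public_key, address

priv, pub, addr = make_wallet()
print(len(addr))  # 40 hex characters: a compact, public identifier
```

The one-way hashes are what make the scheme workable as an ID system: anyone can check that a public key matches an address, but nobody can walk backward from the address to the private key.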


Bipartisan US Senate Bill Eyes Cryptomining Oversight

As part of the bill, the Treasury Department would quantify the amount of cryptocurrency mined in the U.S. - and in nations such as China - since 2016. "In order to strengthen U.S. competitiveness, our government must get a better handle on the role that cryptocurrency is playing in the global economy and how it is being leveraged by other countries," Hassan said. Michael Fasanello, who has served in various roles within the U.S. Justice and Treasury departments, including for Treasury's Financial Crimes Enforcement Network, or FinCEN, tells Information Security Media Group that the move "is liable to tax department resources at a time when they ought to be focusing on collaborating with Congress and private industry on appropriately scoped compliance regulation to protect the crypto ecosystem from illicit actors, while … encouraging innovation." Conversely, Neil Jones, a cybersecurity evangelist for the firm Egnyte, tells ISMG, "[This] bipartisan legislation is a breath of fresh air for the cybersecurity industry. ..."


Strategic Planning in the Agile Era

In a world of constant flux, leaders must create strategies that are able to flex and adapt as necessary. As Crawford said, everything starts by leaders creating “a strategy that can evolve over time. Today’s business strategies should anticipate forward-looking possibilities. Executing changes needs to be nimbler and more responsive.” This means organizations need “to have resource time dedicated to constantly scanning market and innovation opportunities. You can't respond or get out in front without ensuring people have this as part of their role,” said Young. Organizations clearly need to put in place agile systems and processes that allow them not only to adapt more quickly but to take stock of the big picture so they can make more informed and strategic decisions. CIO David Seidl’s organization has effectively created a seed fund “with a $50 million investment to do new things, take risks and focus on innovation and creativity. Now we're trying stuff out, learning lessons, and doing an ever-increasing volume of cool stuff.” While the need to take a step back and view the broader picture is clear, unfortunately, Young said, “too many senior people get sucked into operations or day-to-day project activities.”


Virtual Panel: DevSecOps and Shifting Security Left

There certainly is ambiguity and confusion around who exactly is responsible for securing software and the development process - in fact, we recently found in a report that just over half (58%) of security professionals believe it is their responsibility, while a similar number (53%) of developers believe software security falls under their purview. It’s this lack of consensus that is at the crux of today’s biggest cybersecurity challenge: security is not being baked into software during the development process, which has led to destructive cyber repercussions, as we’ve seen recently with the Kaseya, SolarWinds, and Microsoft attacks. It’s just not possible for one team to keep the software build process secure - we need to incentivize developers to work with security teams from the start of development. To be clear: Developers must become responsible and accountable for the security of the software they build and operate. Developers are often prioritizing speed and innovation, and security teams are left to pick up the pieces after software is built to keep it safe from hackers. 


Digital transformation: Thinking beyond the core of your business can help you grow

Many tech leaders have recounted tales of woe of companies that missed transformational shifts in their markets, and perhaps you've referenced Kodak or Blockbuster Video at some point in your career. With the benefit of hindsight, it's all too easy to assume leaders at these companies had grown fat and lazy and willfully ignored the obvious shifts happening before their eyes. However, rather than suffering from a unique and rare collective incompetence, these leaders diligently and dutifully focused on their core business. They probably assumed that transformation was "above their pay grade" or merely a question of applying some novel and interesting technology to today's business with the assumption that they were taking care of areas outside the core. Separating the capabilities and innovative nature of technology from its application ensures that you regularly devote some of your attention, initiatives, and budget to exploring areas outside your organization's core business. You might even be able to apply seemingly "legacy" technologies that your organization already possesses to areas outside the core and accelerate your company's ability to identify and create truly transformational opportunities with today's tech and skillsets.


Blockchain: How it plays a crucial role in assessment of credit risk in borrowers?

The innovation of blockchain as a technology plays an integral role in alleviating the challenges of the traditional lending process, mainly in the verification of identities. Unlike traditional systems, blockchain is based on distributed ledger technology that decentralises and secures customers’ data. Simply put, it works by keeping the customer data in a distributed ledger instead of centralised storage, which also reduces cyber-crime risks. Given the blockchain infrastructure, the profiling of customers becomes accurate, secure and private. Furthermore, all network participants get access to information and records of transactions without affecting customers’ privacy. Distributed ledger technology eliminates duplication of record maintenance, resulting in a reduction of the cost and time involved in the process. Moreover, blockchain is based on immutability, which means no participant can tamper with a transaction recorded in the distributed ledger. However, if an error occurs while maintaining the record, a reversing entry must be added to correct it, and the erroneous record stays visible.


6G technology and taxonomy of attacks on blockchain technology

Most researchers focus only on the blockchain's characteristics or its architecture and propose solutions to overcome some threats or recorded attacks. Instead, the proposed solutions target and enhance the protocols employed by the blockchain system. Perhaps that is because the blockchain system is not exactly simple, and it is hard to grasp or untangle its complex architecture. However, without considering a redesign of the blockchain system to further alter its characteristics, potential blockchain applications would remain susceptible to different security attacks. ... Most cryptocurrency applications share an almost remarkably similar ecosystem, with differences in the consensus protocols. This section focuses on the Bitcoin ecosystem, which is considered one of the origins of decentralized digital currency, if not the first, and certainly the one most closely associated with the blockchain. Moreover, understanding the Bitcoin ecosystem would, without doubt, set a firm basis for understanding any other existing blockchain-based ecosystem, let alone innovating new blockchain-based applications.


Analysis of Cyber Security In E-Governance Utilizing Blockchain Performance

The blockchain architecture comprises a series of blocks in sequence that encapsulates a total list of transactions; for example, a public ledger. In the blockchain architecture, each block carries a block header, which contains six fields: parent block hash, nonce, nBits, timestamp, Merkle tree root hash and block version. Within the block header, only one hash points to the parent block. In the blockchain architecture, the initial block is known as the genesis block, which lacks any parent block. The blockchain architecture comprises blocks and digital signatures. A block comprises a block body and a block header, as represented in the figure below. The block version in the header represents the rules of block validation that are to be followed. The hash value of the entire set of transactions in a block is represented in the Merkle tree root hash. The present universal time in seconds is represented in the timestamp. The target threshold for a valid block's hash value is indicated in the nBits field. The nonce is a 4-byte field that starts from zero and rises with each calculation of the hash.
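
The six header fields map naturally onto a small structure, and the nonce behavior ("starts from zero and rises with each calculation of the hash") is the mining loop itself. The sketch below is a toy: the difficulty is a simple leading-zeros check rather than Bitcoin's real nBits target encoding, and the field serialization is illustrative.

```python
# Sketch of the block header described above: version, parent hash,
# Merkle root, timestamp, a toy difficulty in place of nBits, and a nonce
# that starts at zero and increases with every hash calculation.
import hashlib
import time

def header_hash(version, parent_hash, merkle_root, timestamp, nbits, nonce):
    data = f"{version}|{parent_hash}|{merkle_root}|{timestamp}|{nbits}|{nonce}"
    return hashlib.sha256(data.encode()).hexdigest()

def mine(parent_hash, merkle_root, difficulty=3):
    """Increment the nonce from zero until the hash meets the target."""
    version, timestamp = 1, int(time.time())
    nonce = 0
    while True:
        h = header_hash(version, parent_hash, merkle_root,
                        timestamp, difficulty, nonce)
        if h.startswith("0" * difficulty):  # toy stand-in for the nBits check
            return nonce, h
        nonce += 1

genesis_parent = "0" * 64  # the genesis block has no real parent
nonce, h = mine(genesis_parent, merkle_root="deadbeef")
print(nonce, h[:8])  # the found nonce and the start of the qualifying hash
```

Because the header commits to the parent hash and the Merkle root, finding a qualifying nonce seals both the block's own transactions and its position in the chain.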


Responsible Tech Series 2021 Part 1: exploring ethics within digital practices

AI regulations are starting to take shape, notably in the EU, but with such measures not set to be fully enforced for another few years (implementation of the AI Act in the EU isn’t expected until 2024), Kewley believes that companies aren’t thinking about compliance enough. “Companies think that they’re over the hill when it comes to privacy,” he said. “But compliance isn’t being thought about yet, and it’s now a very real concern to be considered.” Regarding what more can be done to ensure that regulations are suitable globally, Duke suggested keeping track of how products embedded with AI systems, distributed around the world from countries such as China and the US, are designed. “We need a global framework for AI,” she commented. “Work is being done by the US and the World Economic Forum, but this isn’t globally standardised. This needs to be proactive.” On the flipside, a recent survey conducted by Clifford Chance and YouGov, which had participation across Europe and the US, found that 66% of respondents are feeling positive about AI, and Kewley believes that positive discussions about the technology are a step in the right direction.


5 ways leaders can boost psychological safety on teams

Psychological safety starts with the experience of belonging – one of the most basic needs of every human being. However, it is difficult for people to feel that they are part of a shared story if they lack visibility to the most important discussions and decision-making processes in their organization. To address this, I’ve found two things to be especially effective: Sharing openly as much as you can as early as possible, even when you feel you don’t have time; and Co-creating systems that increase transparency in the whole organization. Both take a lot of time, but it always pays off. I schedule weekly updates with my team and also actively use, and invite others to use, systems we have built for improving the flow of information. ... Belonging means not only knowing what’s going on but also feeling close to others. While technology can help with this, it’s not enough. Creating intimacy during these unprecedented hybrid times can be challenging, but small things can go a long way. For example, at Futurice we make a point of sharing our hobbies and interests when we meet new people. 



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing." - Reed Markham

Daily Tech Digest - October 02, 2021

Microservices are Dead — Long Live Miniservices

We tend to think about “microservices” as small, very logic-focused services that deal with, usually, one responsibility. However, if we look at Martin Fowler’s definition of Microservices — and you know, he’s a very smart guy, so we usually like to know what he thinks — you’ll notice we’re missing a very small, yet key, trait about microservices: decoupled. Let’s take a closer look at what we’re calling “microservice”. This term gets thrown around so much these days that is getting to a point where it’s exactly like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it. Truth be told, in 99% of the interviews I conduct as a manager, when I ask about microservices I get responses about REST APIs. And no, they’re not necessarily the same thing. And by definition, REST APIs alone can’t be microservices, even if you split them up into multiple smaller ones, each taking care of a single responsibility. They can’t, because by definition for you to be able to use a REST API directly, you need to know about it.


Google’s State of DevOps 2021 Report: What SREs Need to Know

SREs may not think of cloud strategy as a core part of their job responsibility. That’s a task that more commonly falls to cloud architects. But simply encouraging their organizations to leverage more reliable cloud architectures can be one way to improve reliability, according to the State of DevOps report. While enhanced reliability is not the only reason why more and more organizations are now expanding into multi-cloud and hybrid cloud architectures, increased availability was the second most common reason for adopting one of these strategies among the professionals whom Google surveyed. The report also noted that organizations with multi-cloud or hybrid cloud architectures were 1.6 times more likely to meet or exceed their performance goals. The takeaway here for SREs is that, although having more clouds to manage creates new reliability challenges in some respects, the data clearly shows that multi-cloud and hybrid cloud lead to better reliability outcomes in the long run. It’s time to let go of your single cloud.


Change management and adaptation for Enterprise Architecture Practitioner

Ask questions like, “Are these applications still relevant?”, “Is this system working?” or “How can I make this system better?” Then assess how you can make a difference to add value and propel your organization to become an industry leader. The complex environment, fueled by continued advances in technology, hinders the ability of the organization to realize value. The enterprise architecture solution will likely not deliver immediate returns (Gong & Janssen, 2021). Kotusev (2018) noted that a rigid approach to enterprise architecture implementation is the worst strategy. Persistent evaluation and adaptation of the EA solution are necessary to signal the need for adaptation. It is appropriate to have parts of the EA strategy remain purposively generalized (Alwadain, 2020; Marcinkowski & Gawin, 2019). For example, a flexible EA solution can quickly transition to SaaS (software as a service) that delivers more value than on-premises operations. Cooiman (2021) noted the importance of considering operations that directly support and influence portfolios, programs, projects, and business functions, such as supply chain management and payroll.


The Togaf® Standard Cited As GovTech Solution By The World Bank Group

As the report notes, previous surveys have not captured the full scope of work happening in GovTech in a reliable way. The Open Group has, as its mission, a long-standing focus on the open flow of information – Boundaryless Information Flow™. Transparent information-sharing makes connected systems worth more than the sum of their parts and makes innovation easier to spread. Likewise, the GTMI’s clear view of where progress is being made in government digitalization is something which will, I think, help to accelerate the modernization of public sector services globally. Indeed, much of the report’s key insights are concerned with ensuring that GovTech infrastructure is interconnected and interoperable. Often, it finds, countries have discrete digitalized workflows such as a back-office solution or an online service portal, but are yet to knit these workflows together. Likewise, while digital workflows open the door to two-way information flow with citizens, making services more efficient and responsive, this has seen only limited global rollout.


Working with Metadata Management Frameworks

Get an MMF Baseline: Even if no formal MMF exists in an organization, an implicit one does. Technical documents mapping data architecture, the knowledgeable business analyst others turn to in order to understand reporting data, and data-entry procedures provide context around an organization’s data and pieces of its MMF. Getting a baseline of which people, processes, and technology already exist and how they inform the organization’s Metadata Management framework just makes sense. Using a “qualified and knowledgeable data professional (and other skilled talents) to administer and interpret data readiness assessments” along with Data Maturity models like those put forth by Gartner, or the Capability Maturity Model of Integration (CMMI), gives a good MMF starting place. Be Clear About What an MMF Will Achieve: Be clear why an organization needs to manage metadata and implement a Metadata Management framework. Metadata Management helps reduce training costs, provides better data usage across data systems, and simplifies communication.


European Blockchain Services Infrastructure (EBSI): the European way to get most out of blockchain

EBSI is designed with a number of core principles in mind: working towards the public good; transparent governance; data compatibility; open-source software; and compliance with relevant EU regulations such as the GDPR and eIDAS. EBSI would provide a common, shared and open public infrastructure aimed at providing and supporting a secure and interoperable ecosystem that will enable the development, launch and operation of EU-wide cross-border digital services in the public sector. The infrastructure will reflect European values, with data sovereignty and green credentials in mind, and tackle global issues – such as climate change and supply chain corruption. EBSI would thereby deliver public services with high requirements of scalability and throughput, interoperability, robustness, and continuity of the service and with the highest standards of security and privacy that will allow public administrations and their ecosystems to verify information and make services trustworthy. This infrastructure should be deployed within a period of 3 years.


Focus on three areas for a holistic data governance approach for self-service analytics

The right tooling will help you put your governance framework into practice, providing the necessary guardrails and data visibility that your teams need to boost trust and confidence in their data analysis. Perhaps the most fundamental tool for data governance—certainly the greatest help for us here at Tableau—is our integrated data catalog. This enables employees to see data details like definitions and formulas, lineage and ownership information, as well as important data quality notifications, from certification status to events, like if a data source refresh failed and the information isn’t up to date. A data catalog boosts the visibility of valuable metadata right in people’s workstreams, whether that metadata lives in Tableau or is brought in from an external metadata management system via an API. This also helps IT with impact analysis and change management, to understand who and which assets are affected downstream when changes are made to a table.
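
The impact analysis described at the end of the excerpt is, at its core, a graph reachability problem over lineage metadata: start at the changed table and walk every edge downstream. The asset names and the edge map below are hypothetical, not drawn from any particular catalog product's API.

```python
# Sketch of lineage-based impact analysis: given metadata that records which
# asset feeds which, find everything downstream of a changed table.
# All asset names here are hypothetical.
from collections import deque

lineage = {  # edges point from a source asset to the assets built on it
    "raw.orders": ["staging.orders_clean"],
    "staging.orders_clean": ["reporting.daily_sales", "ml.churn_features"],
    "reporting.daily_sales": ["dashboard.exec_kpis"],
}

def downstream(asset, lineage):
    """Breadth-first walk over lineage edges; returns all affected assets."""
    affected, queue = set(), deque([asset])
    while queue:
        for child in lineage.get(queue.popleft(), []):
            if child not in affected:
                affected.add(child)
                queue.append(child)
    return affected

print(sorted(downstream("raw.orders", lineage)))
```

This is why catalogs surface lineage alongside definitions and quality notifications: a change to one upstream table can silently touch dashboards and models several hops away.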


Private distributed ledger technology or public blockchain?

A centralized DLT is not immutable: the ledger can be rewritten arbitrarily by whoever controls it, or as the result of a cyberattack. Because of its open and competitive nature (mining, staking, etc.), any blockchain can achieve immutability, and hence its records will be credible. Thousands of independent nodes can ensure an unprecedented level of resistance to any sort of attack. The question of correcting errors usually comes next, after the discussion about immutability. How do you correct a mistake? What if you need to change your smart contract? What if you lost your private key? There is nothing you can do retroactively; alteration in the blockchain is impossible. What’s done is done. In this regard, DLT is usually presented as the opposite, an alternative to blockchain. You will hear that DLTs can be designed so that those who control the network verify transactions on entry, and therefore non-compliant transactions are not allowed to pass through. But it would be a fallacy to think that censorship in the network will ultimately exclude all mistakes and unwanted transactions. There will always be a chance for a mistake.


Can blockchain technology fill the trust gap for your business?

The extensive documentation, verified by third party brokers, that has underpinned trading and commercial agreements in the past is at odds with digital ways of working. The same steps of these processes need to be maintained, but conducted through digital interfaces that are more open and more complex. Distributed Ledger Technologies (DLT) can fill this gap. Distributed ledger describes the approach of creating equal decentralized copies of transactions, instead of storing them in one central place (i.e., a database for digital, or a document for analogue). What makes DLT so exciting and relevant is that it was conceived and developed for this decentralized digital world where trust is at a premium. Instead of being built on existing relationships, trust can be anchored in encrypted processes (the so-called consensus algorithms), which control the transactions. It's not simply a case of storing the information safely that creates trust, it's also how it's collected. DLT can determine the conditions under which nodes of the decentralized infrastructure capture and record new transactions and when they do not.


Achieving New Levels of Resilience Through Use of Cloud-Based Software and Agile Ways of Working

In general, agile teams work with robust methods and practices across different groups and their ecosystem. A tools-driven approach and automated engineering enable a continuous, connected ecosystem in which captured feedback and user behavior are analyzed and acted on. Automated engineering helps deliver a better customer experience for users. Digital-first does not work in silos; it builds products and platforms to connect and create an ecosystem. Traditionally, we dealt with effort, counts, rollbacks, monthly releases, and so on; under the guise of agile, KPIs were chosen to suit management communication and reporting patterns. Modern-day engineering focuses on the outcome. Failure is noticed and fixed rapidly, but how quickly, and how much things improve as a result, are the real questions. In this ecosystem, the end customer sees the change immediately. The success of the ecosystem is measured by several performance indicators, such as MTTX, lead time/cycle time, and deployment rates, on the development side.



Quote for the day:

"We get our power from the people we lead, not from our stars and our bars." -- J. Stanford

Daily Tech Digest - October 01, 2021

6 steps for third-party cyber risk management

Classify vendors based on the inherent risk they pose to the organization (i.e., risk that doesn’t take into account existing mitigations). To do this, create a scoping questionnaire that can be completed by the employee who owns the vendor relationship to capture vital information regarding the service being offered, the location and level of data being accessed, stored or processed, and other factors that indicate what kind of security assessment may be needed. Every vendor presents a different level of risk. For example, vendors that provide critical services usually have access to sensitive information and therefore pose a larger threat to the organization. This is where a vendor risk questionnaire comes in. You can develop your own or use one of the templates available online. In certain cases, your organization may be required to comply with standards like SOC 2 Type 2, ISO 27001, NIST SP 800-53, NIST CSF, PCI-DSS, CSA CCM, etc. It’s also important that your questionnaire covers questions related to such frameworks and compliance requirements.
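The classification step above can be sketched as a simple inherent-risk score computed from scoping-questionnaire answers. The field names, weights, and tier cutoffs below are illustrative assumptions, not part of any standard:

```python
# Minimal sketch of inherent-risk tiering from a scoping questionnaire.
# Field names, weights, and cutoffs are assumptions for illustration.
WEIGHTS = {
    "accesses_sensitive_data": 3,   # level of data accessed/stored/processed
    "provides_critical_service": 3,
    "stores_data_offshore": 2,      # location factor
    "uses_subcontractors": 1,
}

def inherent_risk(answers):
    """Score a vendor before mitigations, then map to an assessment tier."""
    score = sum(weight for field, weight in WEIGHTS.items() if answers.get(field))
    if score >= 5:
        return "high"    # full security assessment needed
    if score >= 3:
        return "medium"  # targeted questionnaire
    return "low"         # lightweight review

tier = inherent_risk({"accesses_sensitive_data": True,
                      "provides_critical_service": True})
# tier == "high"
```

In practice the answers would come from the completed questionnaire, and the tier would decide which assessment depth (and which compliance framework questions) apply.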


Incentivizing Developers is the Key to Better Security Practices

To help development teams improve their cybersecurity prowess, they must first be taught the necessary skills. Utilizing scaffolded learning, and tools like Just-in-Time (JiT) training can make this process much less painful, and helps to build upon existing knowledge in the right context. The principle of JiT is that developers are served the right knowledge at just the right time, for example, if a JiT developer training tool detects that a programmer is creating an insecure piece of code, or is accidentally introducing a vulnerability into their application, it can activate and show the developer how they could fix that problem, and how to write more secure code to perform that same function in the future. With a commitment to upskilling in place, the old methods of evaluating developers based solely on speed need to be eliminated. Instead, coders should be rewarded based on their ability to create secure code, with the best developers becoming security champions that help the rest of the team improve their skills. 


The Turbulent Past And Uncertain Future Of AI

Although deep-learning systems tend to be black boxes that make inferences in opaque and mystifying ways, neuro-symbolic systems enable users to look under the hood and understand how the AI reached its conclusions. The U.S. Army is particularly wary of relying on black-box systems, as Evan Ackerman describes in "How the U.S. Army Is Turning Robots Into Team Players," so Army researchers are investigating a variety of hybrid approaches to drive their robots and autonomous vehicles. Imagine if you could take one of the U.S. Army's road-clearing robots and ask it to make you a cup of coffee. That's a laughable proposition today, because deep-learning systems are built for narrow purposes and can't generalize their abilities from one task to another. What's more, learning a new task usually requires an AI to erase everything it knows about how to solve its prior task, a conundrum called catastrophic forgetting. At DeepMind, Google's London-based AI lab, the renowned roboticist Raia Hadsell is tackling this problem with a variety of sophisticated techniques.


Increase Your DevOps Productivity Using Infrastructure as Low Code

Often what people focus on around DevOps is the tooling element, as this often leads people down the continuous integration and continuous delivery route, aka CI/CD. One of the most popular open-source CI/CD tools is Jenkins, which is an all-in-one automation server that brings together the various parts of the software development life cycle. There are endless tools available on the market that can fit into your DevOps processes and cover virtually any technology stack you can think of these days. As Jenkins is one of the most popular, let’s take a look at some of its pros and cons in comparison with other infrastructure-as-low-code tools. With Jenkins being open source, this gives you full control over the platform and what you do with it. Unfortunately, this also puts all the responsibility on you to make sure it’s doing what it should be doing. Starting at the infrastructure level, this is something you have to host yourself, which naturally comes with an associated cost for the underlying resource.


Russian Scientists Use Supercomputer To Probe Limits of Google’s Quantum Processor

From the early days of numerical computing, quantum systems have appeared exceedingly difficult to emulate, though the precise reasons for this remain a subject of active research. Still, this apparently inherent difficulty of a classical computer to emulate a quantum system prompted several researchers to flip the narrative. Scientists such as Richard Feynman and Yuri Manin speculated in the early 1980s that the unknown ingredients which seem to make quantum computers hard to emulate using a classical computer could themselves be used as a computational resource. For example, a quantum processor should be good at simulating quantum systems, since they are governed by the same underlying principles. Such early ideas eventually led to Google and other tech giants creating prototype versions of the long-anticipated quantum processors. These modern devices are error-prone: they can only execute the simplest of quantum programs, and each calculation must be repeated multiple times to average out the errors in order to eventually form an approximation.


The Eclat algorithm

In this article, you will learn everything that you need to know about the Eclat algorithm. Eclat stands for Equivalence Class Clustering and Bottom-Up Lattice Traversal, and it is an algorithm for association rule mining (which also encompasses frequent itemset mining). Association rule mining and frequent itemset mining are easiest to understand through their application to basket analysis: the goal there is to understand which products are often bought together by shoppers. These association rules can then be used, for example, in recommender engines (in the case of online shopping) or for store improvement in offline shopping. The ECLAT algorithm is not the first algorithm for association rule mining. The foundational algorithm in the domain is the Apriori algorithm. Since Apriori was the first algorithm proposed in the domain, it has since been improved upon in terms of computational efficiency (i.e., faster alternatives have been developed). There are two faster, state-of-the-art alternatives to the Apriori algorithm: one of them is FP-Growth and the other is ECLAT.
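The core idea behind ECLAT is a vertical data layout: each item maps to the set of transaction ids that contain it (its tid-list), and the support of a larger itemset is found by intersecting tid-lists. A minimal sketch of that idea (breadth-first for brevity, where textbook ECLAT traverses the lattice depth-first):

```python
def eclat(transactions, min_support=2):
    """Frequent itemset mining via tid-list intersections (the ECLAT idea)."""
    # Vertical layout: itemset -> set of transaction ids containing it
    tidlists = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tidlists.setdefault(frozenset([item]), set()).add(tid)
    current = {k: v for k, v in tidlists.items() if len(v) >= min_support}
    result = dict(current)
    while current:
        nxt = {}
        keys = list(current)
        for i in range(len(keys)):
            for j in range(i + 1, len(keys)):
                candidate = keys[i] | keys[j]
                # Combine only itemsets that differ in exactly one item
                if len(candidate) == len(keys[i]) + 1 and candidate not in nxt:
                    tids = current[keys[i]] & current[keys[j]]  # intersection
                    if len(tids) >= min_support:
                        nxt[candidate] = tids
        result.update(nxt)
        current = nxt
    return {tuple(sorted(k)): len(v) for k, v in result.items()}

baskets = [["bread", "milk"], ["bread", "butter"],
           ["bread", "milk", "butter"], ["milk"]]
supports = eclat(baskets)
# supports[("bread", "milk")] == 2
```

Because supports come from set intersections rather than repeated scans of the whole transaction list, this is the efficiency gain ECLAT offers over Apriori's candidate-counting passes.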


Why Coding Interviews Are Getting So Hard?

If candidates are sheep, then interviewers are wolves. The sheep learn to run faster and faster because they want to survive, and so do the wolves. Years ago, there weren’t any interview practice materials. New grads would review their Data Structures and Algorithms textbooks to prepare for coding interviews. And we would turn to senior students who had been through some interview process to pick up some wisdom. ... If you are an interviewer, try to avoid problems that are easily available on the internet, or at least tweak them before using them. Try to avoid problems that clearly require practice, e.g., dynamic programming. Try to focus less on whether a problem is solved perfectly and instead pay more attention to how candidates think and approach the problem. If you are a candidate, prepare for the interviews as hard as you can! Frankly speaking, that may not be the best way to use your time. But you need to do what you need to do. And after the interview, don’t share the problems. The world is big and pretty diverse. The discussions above are based on my very limited experience. And they might be wrong in a different context.


For networking pros, every month is Cybersecurity Awareness Month

Not sure why the organizers didn’t make “Cybersecurity First” the theme of the month’s first week, but it is not for me to second-guess the federal Cybersecurity & Infrastructure Security Agency (CISA) and the public/private National Cyber Security Alliance (NCSA), organizers of the annual awareness month. NCSAM is a great idea, just as are Bat Appreciation Month, Church Library Month, and International Walk to School Month, all of which also occur in October. It’s always good to be reminded that precautions and safeguards are needed when navigating a sometimes dangerous digital world. And that walking to school benefits students physically and mentally. For enterprise professionals, of course, every month is Cybersecurity Awareness Month. Security is constantly on the minds of enterprise IT pros, if not the minds of enterprise workers (sore subject!). And well it should be, coming off a year described by the CrowdStrike 2021 Global Threat Report as “perhaps the most active year in memory.”


Cloud computing in manufacturing: from impossible to indispensable

Advancements in infrastructure, combined with the exponential growth of software offerings in the cloud, have accelerated the digitisation of supply chains, allowing companies to operate and interact with each other in a more transparent and automated way. Companies are quickly expanding their operational intelligence, moving from single-asset descriptive analytics, where manufacturers are informed of what has happened, to prescriptive analytics, where manufacturers are informed of options to respond to what’s about to happen, across multiple lines, factories, and all the way to critical elements of their supply chain. The exponential value creation cycle enabled by the Cloud Continuum does not depend on IT alone. It requires organisations to have a well-defined vision, an adequate operating model, and a properly designed set of technology adoption principles. The adoption of cloud solutions without these three components usually leads to difficulty scaling and sustaining the intended benefits. In summary, cloud adoption in manufacturing went from a concept deemed impossible to an indispensable capability.


Today’s cars are mobile data centers, and that data needs to be protected

The utopian vision of the AV paradigm removing the stress of having to pilot the vehicle, improving road safety, and managing urban traffic flows has already given rise to what manufacturers are referring to as the “passenger economy”. While we are chauffeured by software, we will be able to work, shop, and play from the comfort of our seats within continuous network connectivity. Independent of our own data demand, our vehicles will also be communicating and receiving sensor and telemetry data with other vehicles to avoid collisions, with our smart cities to ensure an efficient journey time, and with the manufacturer to schedule maintenance and contribute to the next generation of car design. All this critical data, however, could form the basis of a dystopian nightmare. Compromised applications might disable the software controlling safety systems on which AVs will depend. Knowledge of the driver’s identity, social media streams, and location might proliferate an avalanche of targeted advertising from local services, a loss of privacy, and potentially compromised personal safety. 



Quote for the day:

"Leaders dig into their business to learn painful realities rather than peaceful illusion." -- Orrin Woodward

Daily Tech Digest - September 30, 2021

How IBM lost the cloud

At first, Genesis still lacked support for the key virtual private cloud technology that both engineers and salespeople had identified as important to most prospective cloud buyers. This caused a split inside IBM Cloud: A group headed by the former Verizon executives continued to work on the Genesis project, while another group, persuaded by a team from IBM Research that concluded Genesis would never work, began designing a separate infrastructure architecture called GC that would achieve the scaling goals and include the virtual private cloud technology using the original SoftLayer infrastructure design. Genesis would never ship. It was scrapped in 2017, and that team began work on its own new architecture project, internally called NG, that ran in parallel to the GC effort. For almost two years, two teams inside IBM Cloud worked on two completely different cloud infrastructure designs, which led to turf fights, resource constraints and internal confusion over the direction of the division.


How Can Artificial Intelligence Transform Software Testing?

The rise of automated testing has coincided with the adoption of agile methodologies in software development. This allows QA specialists to deliver error-free and robust software in small batches. Manual testing is now restricted merely to business acceptance testing. DevOps testing, along with automation, helps agile teams ship a guaranteed product for SaaS/cloud deployment through a Continuous Integration/Continuous Delivery pipeline. In software testing, Artificial Intelligence is a blend of machine learning, cognitive automation, reasoning, analytics, and natural language processing. Cognitive automation leverages several technological approaches such as data mining, semantic technology, text analytics, machine learning, and natural language processing. For instance, Robotic Process Automation (RPA) is one such connecting link between Artificial Intelligence and Cognitive Computing.


GriftHorse Money-Stealing Trojan Takes 10M Android Users for a Ride

The creators of the apps have employed several novel techniques to help the apps stay off the radar of security vendors, the analysis found. In addition to the no-reuse policy for URLs mentioned above, the cybercriminals are also developing the apps using Apache Cordova. Cordova allows developers to use standard web technologies – HTML5, CSS3 and JavaScript – for cross-platform mobile development – which in turn allows them to push out updates to apps without requiring user interaction. “[This] technology can be abused to host the malicious code on the server and develop an application that executes this code in real-time,” according to Zimperium. “The application displays as a web page that references HTML, CSS, JavaScript and images.” The campaign is also supported with a sophisticated architecture and plenty of encryption, which makes detection more difficult, according to the researchers. For instance, when an app is launched, the encrypted files stored in the “assets/www” folder are decrypted using AES.


How to Decide in Self-Managed Projects - a Lean Approach to Governance

If the people in the project can make decisions themselves, we can call it self-managed. By “self-managed” (or self-organized), I mean that the project members can make decisions about the content of the work, and also who does what and by when. Self-managed groups have the advantage that those who do the work are closer to the decisions: decisions are better grounded in operations, and there is more buy-in and deeper insight from those who are going to carry out the tasks into how the tasks fit into the bigger picture. One step further would be to have a self-governed project. ... The trick is to use lean governance, intentionally and in our favor. The goal of governance in a new project is to provide just enough structure to operate well. Just enough team structure to have a clear division of labor. Just enough meeting structure to use our time well. Not more but also not less. That level of “just enough,” of course, depends on the phase of the project.


Citi’s big idea: central, commercial banks use shared DLT for “digital money format war”

The concept involves creating a blockchain with tiers and partitions, on which central banks perform the same current role dealing with commercial banks. On the same ledger, commercial banks and emoney providers perform similar activities as they do now with their clients. Given this is how things work today and most legislation is technology agnostic, it likely wouldn’t require legislative changes and may dispense with the need for CBDCs. In Mclaughlin’s view, the debates around central bank digital currency (CBDC) frame the conversation as public versus private money. An alternative perspective is to look at regulated versus unregulated money. The concept also addresses bank coins or settlement tokens. “If we as commercial banks think that the right thing to do is for each of us to create our own coins, again, the regulated sector will be fragmented. And that will not help in the contest between regulated money and non-regulated money,” said Mclaughlin. Central bank money, commercial bank money and emoney are all regulated and represent specific legal liabilities, no matter their technical form.


Major Quantum Computing Strategy Suffers Serious Setbacks

The key to quantum computing is that, during the computation, you must avoid revealing what information your qubits encode: If you look at a bit and say that it holds a 1 or a 0, it becomes merely a classical bit. So you must shield your qubits from anything that could inadvertently reveal their value. (More strictly, decide their value — for in quantum mechanics this only happens when the value is measured.) You need to stop such information from leaking out into the environment. That leakage corresponds to a process called quantum decoherence. The aim is to carry out quantum computing before decoherence can take place, since it will corrupt the qubits with random errors that will destroy the computation. Current quantum computers typically suppress decoherence by isolating the qubits from their environment as well as possible. The trouble is, as the number of qubits multiplies, this isolation becomes extremely hard to maintain: Decoherence is bound to happen, and errors creep in.


Apple Pay-Visa Vulnerability May Enable Payment Fraud

The vulnerabilities were detected in iPhone wallets where Visa cards were set up in "express transit mode," the researchers say. The transit mode feature, launched in May 2019, enables commuters to make contactless mobile payments without fingerprint authentication. Threat actors can use the vulnerability to bypass the Apple Pay lock screen and illicitly make payments using a Visa card from a locked iPhone to any contactless Europay, Mastercard and Visa - or EMV - reader, for any amount, without user authorization, the researchers say. Information Security Media Group could not immediately ascertain the number of users affected by this vulnerability. "The weakness lies in the Apple Pay and Visa systems working together and does not affect other combinations, such as Mastercard in iPhones, or Visa on Samsung Pay," the researchers note. The researchers, who come from the University of Birmingham’s School of Computer Science and the University of Surrey’s Department of Computer Science, found the flaw as part of a project dubbed TimeTrust.


How much trust should we place in the security of biometric data?

Whilst the collection of fingerprint data is very convenient for the border control forces, how convenient is it for the asylum seekers themselves? Could they be opening themselves up to greater risks by providing their data? A potential issue here is the amount of trust that people place in fingerprints. People assume that fingerprints are an infallible method of identification. Whilst the chance of two people having matching fingerprints is infinitesimally small, automated matching systems often do not make use of the entire fingerprint. Different levels of detail can be used in matching, with differing levels of reliability. When asked to provide your fingerprints for identification purposes, how often do we consider how the matching is performed? Whilst standards exist for the robustness of fingerprint matching when used within the Criminal Justice System, can we assume that the same standards apply to border control systems? Generally, the fewer comparison points to be analyzed, the faster the matching system; in a border control situation where a large quantity of people are being processed, it is important to understand how much of a trade-off between speed and accuracy has occurred. 


The New Security Basics: 10 Most Common Defensive Actions

The current assessments found that the growing number of public incidents of ransomware attacks and attacks on the software supply chain, such as the compromise of remote management software maker Kaseya, have companies more focused on activities designed to prevent or mitigate incidents. Over the past two years, 61% more companies have actively sought to identify open source — 74 this year versus 46 two years ago — while 55 companies have begun to mandate boilerplate software license agreements, an increase of 57% compared with two years ago. "Over the last 18 months, organizations experienced a massive acceleration of digital transformation initiatives," said Mike Ware, information security principal at Navy Federal Credit Union, a member organization of the BSIMM community, in a statement. "Given the complexity and pace of these changes, it's never been more important for security teams to have the tools which allow them to understand where they stand and have a reference for where they should pivot next."


Cycle Time Breakdown: Reducing Pull Request Pickup Time

There’s nothing worse than working hard for a day or two on a difficult piece of code, creating a pull request for it, and having no one pay attention or even notice. It’s especially frustrating if you specifically assign the Pull Request to a teammate. It’s a bother to have to remember to send emails or Slack messages to fellow team members to get them to do a review. No one wants to be a distraction, but the work has to be done, right? So naturally, the conscientious Dev Manager will want to pay close attention to Pull Request Pickup Time (PR Pickup Time), the second segment of a project’s journey along the Cycle Time path. (See the earlier blog post about the first segment, Coding Time.) She’ll want to make sure those frustrations described above don’t occur. Keeping Cycle Time “all green” is the goal, but this is often difficult because there are a lot of moving parts that go into managing Cycle Time, including PR Pickup Time.



Quote for the day:

"Leaders must see the dream in their mind before they will accomplish the dream with their team." -- Orrin Woodward

Daily Tech Digest - September 29, 2021

Approaching Anomaly Detection in Transactional Data

Usually, people mean financial transactions when they talk about transactional data. However, according to Wikipedia, “Transactional Data is data describing an event (the change as a result of a transaction) and is usually described with verbs. Transaction data always has a time dimension, a numerical value and refers to one or more objects”. In this article, we will use data on requests made to a server (internet traffic data) as an example, but the considered approaches can be applied to most of the datasets falling under the aforementioned definition of transactional data. Anomaly detection, in simple words, is finding data points that shouldn’t normally occur in the system that generated the data. Anomaly detection in transactional data has many applications; here are a few examples: fraud detection in financial transactions; fault detection in manufacturing; attack or malfunction detection in a computer network (the case covered in this article); recommendation of predictive maintenance; and health condition monitoring and alerting.
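For the server-request example, a deliberately simple baseline is to flag time buckets whose counts deviate strongly from the mean. The z-score sketch below ignores seasonality and trend, which real traffic data would require handling:

```python
def zscore_anomalies(counts, threshold=3.0):
    """Flag indices whose value lies more than `threshold` standard
    deviations from the mean (a naive global z-score baseline)."""
    n = len(counts)
    mean = sum(counts) / n
    std = (sum((c - mean) ** 2 for c in counts) / n) ** 0.5 or 1.0
    return [i for i, c in enumerate(counts) if abs(c - mean) / std > threshold]

# Requests per minute: stable traffic with one sudden spike at the end
requests = [100] * 20 + [1000]
print(zscore_anomalies(requests))  # [20]
```

The same shape of interface, time-indexed values in, anomalous indices out, carries over when the naive statistic is swapped for a seasonality-aware or model-based detector.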


Apache Kafka: Core Concepts and Use Cases

The first thing anyone who works with streaming applications ought to understand is the concept of an event: a small piece of data. For instance, when a user registers with the system, an event is created. You can likewise think of an event as a message with data, which can be processed and saved somewhere if required. The event is the message in which data such as the user’s name, email, password, and so forth can be included. This highlights that Kafka is a platform that works well for streaming events. Events are continually written by producers. They are called producers because they write events, or data, to Kafka. There are numerous sorts of producers. Examples of clients include web servers, parts of applications, whole applications, IoT devices, monitoring agents, and so on. A new user registration event can be produced by the component of the site that is responsible for user registrations.
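The event/producer/consumer relationship can be illustrated with a toy in-memory log. This is not the real Kafka client API, just a sketch of the concepts: named topics, events appended by producers, and offsets read by consumers:

```python
import json
import time

class ToyEventLog:
    """A toy in-memory stand-in for a Kafka broker (illustration only)."""
    def __init__(self):
        self.topics = {}  # topic name -> append-only list of events

    def produce(self, topic, value):
        """A producer writes an event; the broker assigns the next offset."""
        log = self.topics.setdefault(topic, [])
        log.append({"offset": len(log), "timestamp": time.time(), "value": value})

    def consume(self, topic, from_offset=0):
        """A consumer reads events from a given offset onward."""
        return self.topics.get(topic, [])[from_offset:]

broker = ToyEventLog()
# The registration component of a site acts as a producer:
broker.produce("user-registrations",
               json.dumps({"name": "Ada", "email": "ada@example.com"}))
events = broker.consume("user-registrations")
# events[0]["value"] carries the registration data, starting at offset 0
```

A real producer would use a Kafka client library and a running broker, but the shape is the same: an event is serialized, appended to a topic, and later read by any number of consumers from their own offsets.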


How to Build a Regression Testing Strategy for Agile Teams

Regression testing is the process of testing software to verify that code changes, updates, or improvements to the application have not affected its existing functionality. Regression testing in software engineering ensures the overall stability and functionality of the software’s existing features, so the system stays sustainable under continuous improvement as new features are added to the code. Regression testing helps target and reduce the risk of code dependencies, defects, and malfunctions, so that previously developed and tested code stays operational after modification. Generally, the software undergoes many tests before new changes are integrated into the main development branch of the code. ... Automated regression testing is mainly used with medium and large complex projects once the project is stable. With a thorough plan, automated regression testing reduces the time a tester spends on tedious, repeatable tasks, freeing them for work that requires manual attention, like exploratory tests and UX testing.
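A regression test in this sense is simply a recorded expectation about existing behavior that is rerun after every change. The `slugify` function below is a hypothetical example of code under test, not from the article:

```python
def slugify(title):
    """Hypothetical function under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify_regressions():
    # Expectations captured before the latest change; if a later
    # "improvement" breaks any of these, the regression is caught
    # before the change reaches the main development branch.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Agile  Teams ") == "agile-teams"

test_slugify_regressions()
```

In an agile setup, a suite of such tests would run automatically in CI on every pull request, which is what makes the repeatable part of regression testing cheap enough to run continuously.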


Sam Newman on Information Hiding, Ubiquitous Language, UI Decomposition and Building Microservices

The ubiquitous language in many ways is the keystone of domain-driven design, and it's amazing how many people skip it, because it's foundational. I think a lot of the reason that people skip ubiquitous language is because to understand what terms and terminology are used by the business side of your organization, by the users of your software, it involves having to talk to people. It still stuns me how many enterprise architects have come up with a domain model by themselves without ever having spoken to anybody outside of IT. So fundamentally, the ubiquitous language starts with having conversations. This is why I like event storming as a domain-driven design technique, because it places primacy on having that kind of collective brainstorming activity where you get your non-developer, your non-technical stakeholders in the room and listen to what they're talking about, and you're picking up their terms, their terminology, and you're trying to put those terms into your code.


Technical architecture: What IT does for a living

Technical architecture is the sum and substance of what IT deploys to support the enterprise. As such, its management is a key IT practice. We talked about how to go about it in a previous article in this series. Which leads to the question, What constitutes good technical architecture? Or more foundationally, What constitutes technical architecture, whether good, bad, or indifferent? In case you’re a purist, we’re talking about technical architecture, not enterprise architecture. The latter includes the business architecture as well as the technical architecture. Not that it’s possible to evaluate the technical architecture without understanding how well it supports the business architecture. It’s just that managing the health of the business architecture is Someone Else’s Problem. IT always has a technical architecture. In some organizations it’s deliberate, the result of processes and practices that matter most to CIOs. But far too often, technical architecture is accidental — a pile of stuff that’s accumulated over time without any overall plan.


Preparing for the 'golden age' of artificial intelligence and machine learning

"Implementing an AI solution is not easy, and there are many examples of where AI has gone wrong in production," says Tripti Sethi, senior director at Avanade. "The companies we have seen benefit from AI the most understand that AI is not a plug-and-play tool, but rather a capability that needs to be fostered and matured. These companies are asking 'what business value can I drive with data?' rather than 'what can my data do?'" Skills availability is one of the leading issues that enterprises face in building and maintaining AI-driven systems. Close to two-thirds of surveyed enterprises, 62%, indicated that they couldn't find talent on par with the skills requirements needed in efforts to move to AI. More than half, 54%, say that it's been difficult to deploy AI within their existing organizational cultures, and 46% point to difficulties in finding funding for the programs they want to implement. ... In recent months and years, AI bias has been in the headlines, suggesting that AI algorithms reinforce racism and sexism. 


Skilling in the IT sector for a post-pandemic era – An Expert's View

“When there’s a necessity, innovations follow,” said Mahipal Nair (People Development & Operations Leader, NielsenIQ). The company moved from people-interaction-dependent learning to digital methods to navigate skilling priorities. As consumer expectations change, leadership and social skills have become a priority for workplace performance. “The way to solve this is not just to transform current talent, but create relevant talent,” said Nilanjan Kar (CRO, Harappa). Echoing the sentiment, Kirti Seth (CEO, SSC NASSCOM) added that “learning should be about principles, and it should enable employees to make the basics their own.” This will help create a learning organization that can contextualize change across the industry to stay relevant and map the desired learning outcomes. While companies upskill their workforce on these priorities, the real question is: what skills will be required? Anupal Banerjee (CHRO, Tata Technologies) noted that “with the change in skills, there are multiple levels to focus on. While one focus area is on technical skills, the second is on behavioral skills. ...”


Re-evaluating Kafka: issues and alternatives for real-time

By nature, your Kafka deployment is pretty much guaranteed to be a large-scale project. Imagine operating an equally large-scale MySQL database that is used by multiple critical applications. You’d almost certainly need to hire a database administrator (or a whole team of them) to manage it. Kafka is no different. It’s a big, complex system that tends to be shared among multiple client applications. Of course it’s not easy to operate! Kafka administrators must answer hard design questions from the get-go. These include how messages are laid out in partitioned topics, what retention policies apply, and how team or application quotas are allocated. We won’t get into detail here, but you can think of this task as designing a database schema, but with the added dimension of time, which multiplies the complexity. You need to consider what each message represents, how to ensure it will be consumed in the proper order, where and how to enact stateful transformations, and much more — all with extreme precision.
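The ordering question above usually comes down to keyed partitioning: Kafka only guarantees order within a single partition, so all messages for one entity must hash to the same partition. Here is a minimal, self-contained sketch of that idea; the partition count and key names are invented for illustration, and the hash used here stands in for Kafka's actual default (murmur2) partitioner.

```python
# Sketch of keyed partitioning, which underlies Kafka's per-key ordering.
# NUM_PARTITIONS and the order keys are illustrative assumptions.
import hashlib

NUM_PARTITIONS = 6  # assumed partition count for the topic

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Deterministically map a message key to a partition index.
    Kafka's default producer uses murmur2; md5 is a stand-in here."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Every message keyed "order-42" lands in the same partition, so those
# messages are consumed in production order relative to each other.
keys = ["order-42", "order-42", "order-7", "order-42"]
partitions = [partition_for(k) for k in keys]
assert partitions[0] == partitions[1] == partitions[3]
```

The design consequence is that choosing the key (order ID, user ID, device ID) fixes both your ordering guarantee and your load distribution across partitions, which is why it has to be decided up front.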


Climbing to new heights with the aid of real-time data analytics

Enter hybrid analytics. The world of data management has been reimagined, with analytics at the speed of transactions made possible through simpler processes and a single hybrid system that breaks down the walls between transactions and analytics. Hybrid analytics avoids moving information from databases to data warehouses and allows simple real-time data processing. This innovation enables enhanced customer experiences and a more data-driven approach to decision making, thanks to the deeper business insights delivered through a hybrid system. Hybrid analytics means real-time processing delivers a faster time to insight. Businesses can also better understand their customers without long, complex processes, while the feedback loop is shortened for increased efficiency. It's this approach that delivers a data-driven competitive advantage. Both developers and database administrators can access and manage data far more easily, dealing with only one connected system and no database sprawl.
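The core idea, sometimes called HTAP, is that transactional writes and analytical reads hit the same live store, with no export step to a separate warehouse. As a toy illustration, SQLite stands in for a real hybrid system below; the table and column names are invented for the example.

```python
# Toy HTAP illustration: transactional inserts and an analytical
# aggregate run against the same store, with no ETL in between.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")

# Transactional side: record sales as they happen.
with conn:
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        [("north", 120.0), ("south", 80.0), ("north", 50.0)],
    )

# Analytical side: aggregate immediately over the same live data.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 170.0), ('south', 80.0)]
```

In a traditional split architecture, the aggregate would run against a warehouse populated by a nightly batch job; here the insight is available the moment the transaction commits, which is the shortened feedback loop the excerpt describes.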


Why DevSecOps fails: 4 signs of trouble

When Haff says that some organizations make the mistake of not giving DevSecOps its due, he adds that the people and culture component is most often the glaring omission. Of course, it’s not actually “glaring” until you realize that your DevSecOps initiative has fallen flat and you start to wonder why. One way you might end up traveling this suboptimal path: you focus too much on technology as the end-all solution rather than as one layer in a multi-faceted strategy. “They probably have adopted at least some of the scanning and other tooling they need to mitigate various types of threats. They’re likely implementing workflows that incorporate automation and interactive development,” Haff says. “What they’re likely paying less attention to – and may be treating as an afterthought – is people and culture.” Just as DevOps was about more than a toolchain, DevSecOps is about more than throwing security technologies at various risks. “An organization can get all the tools and mechanics right but if, for example, developers and operations teams don’t collaborate with your security experts, you’re not really doing DevSecOps,” Haff says.



Quote for the day:

"Authentic leaders are often accused of being 'controlling' by those who idly sit by and do nothing" --John Paul Warren