Daily Tech Digest - July 08, 2021

Security Problems Worsen as Enterprises Build Hybrid and Multicloud Systems

"Most organizations have had a cloud attack surface for years and didn't know about it," said Andrew Douglas, managing director in cybersecurity at Deloitte & Touche. "They were using SaaS applications and piloting different cloud providers going back a decade." Companies are under increasing pressure to move to the cloud, which the pandemic has accelerated. While security technologies are becoming more comprehensive and robust, IT managers are tempted to skip over the security planning steps and jump straight into putting new solutions into production. "There's been a lot of temptation to move quickly," said Douglas. "Our clients are trying to accelerate their organizations' move to the cloud, but whether they put in the time and investment in implementing security – well, that has been lagging." The biggest challenge faced by those that do want to invest in security planning upfront is getting an accurate view of all their assets. "What do we have out there? What are we spinning out on a daily basis? What are the subscriptions we have in the cloud? What infrastructure as code? What serverless functionality? ..."


What Colonial Pipeline Means for Commercial Building Cybersecurity

Smart buildings are particularly vulnerable to cyberattacks as more Internet of Things devices are deployed and the use of remote management tools increases. While IT systems are typically focused on the core security triad of confidentiality, integrity, and availability of information, the BMS security triad is different. The BMS focus should be on the availability of operational assets, integrity/reliability of the operational process, and confidentiality of operational information. The deployment of a multidisciplinary defense approach across system levels requires a cost-benefit balanced focus on operations, people, and technology. Managing cyber-risks starts with organizational governance and executive-level commitments. This can include developing a cybersecurity strategy with a defined vision, goals, and objectives, as well as metrics, such as the number of building control system vulnerability assessments completed. In addition, senior leadership needs to ensure that the right technologies are procured and deployed, defenses are deployed in layers, access to the BMS via the IT network is limited as much as possible, and intrusion detection technologies are deployed.


Getting the board on board: a cost-benefit analysis approach to cyber security

A cost-benefit analysis is a method used to evaluate a project by comparing its losses and gains — essentially a quantified and qualified list of pros and cons. Undertaking a cost-benefit analysis is a great way to assess projects because it reduces the evaluation complexity to a single figure. As you can imagine, this makes a cost-benefit analysis an invaluable tool when it comes to explaining the intricacies and selling the value of a robust cyber security strategy to your board. One of the most important things to emphasise in your cost-benefit analysis is the trade-off between paying to prevent a mess versus paying to clean up a mess. A recent Cabinet Office report stated the estimated cost of cyber crime to the UK economy is a whopping £27 billion. And when it comes to individual attacks, a Sophos survey in April 2021 found that the average total cost of recovery from a ransomware attack has more than doubled in a year, increasing from $761,106 in 2020 to $1.85 million in 2021. Of course, investing in preventative cyber security measures also comes at a cost. 
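
To make the prevention-versus-cleanup trade-off concrete, here is a minimal sketch of the calculation such an analysis often boils down to, using the common Return on Security Investment (ROSI) formula. The loss figure echoes the Sophos number quoted above; the mitigation ratio and control cost are hypothetical assumptions.

```python
# Minimal ROSI sketch; all inputs are illustrative assumptions.
annual_loss_expectancy = 1_850_000  # average ransomware recovery cost (Sophos, 2021)
mitigation_ratio = 0.85             # fraction of expected loss prevented (assumed)
control_cost = 250_000              # annual cost of preventative measures (assumed)

loss_prevented = annual_loss_expectancy * mitigation_ratio
rosi = (loss_prevented - control_cost) / control_cost

print(f"Expected loss prevented: ${loss_prevented:,.0f}")
print(f"ROSI: {rosi:.0%}")  # positive: prevention costs less than cleanup
```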


OQC Delivers the UK’s first Quantum Computing as-a-Service

OQC’s core innovation, the Coaxmon, solves these challenges using a three-dimensional architecture that moves the control and measurement wiring out of plane and into a 3D configuration. This vastly simplifies fabrication, improving coherence and – crucially – boosting scalability. This key advantage underpins the company’s confidence in its strategy to “build the core and partner with the best”. Just four years after it was founded, having attracted nearly £2m of UK government support and some of the leading scientists and engineers in the field, the pre-Series A startup is now a leader in the “noisy intermediate-scale quantum” (NISQ) era of quantum computing. Yet OQC is doing so with a fundamental advantage when it comes to scaling up to future generations of quantum machines. This radical design innovation and its proven effectiveness so far is driving the company in its mission to help its customers explore the possibilities of quantum advantage. It is also a great example of the value of the National Quantum Technologies Programme in supporting excellent research and the growth of start-ups helping to create a vibrant UK quantum sector.


How I avoid breaking functionality when modifying legacy code

If you were to leave the code as it was written during the first pass (i.e., long functions, a lot of bunched-up code for easy initial understanding and debugging), it would render IDEs powerless. If you cram all capabilities an object can offer into a single, giant function, later, when you're trying to utilize that object, IDEs will be of no help. IDEs will show the existence of one method (which will probably contain a large list of parameters providing values that enforce the branching logic inside that method). So, you won't know how to really use that object unless you open its source code and read its processing logic very carefully. And even then, your head will probably hurt. Another problem with hastily cobbled-up, "bunched-up" code is that its processing logic is not testable. While you can still write an end-to-end test for that code (input values and the expected output values), you have no way of knowing if the bunched-up code is doing any other potentially risky processing. Also, you have no way of testing for edge cases, unusual scenarios, difficult-to-reproduce scenarios, etc. That renders your code untestable, which is a very bad thing to live with.
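
A minimal sketch of the alternative may help: the same capability split into small, single-purpose functions that IDEs can surface and unit tests can exercise. All names and logic here are invented for illustration.

```python
# Small, single-purpose functions instead of one "bunched-up" method.

def parse_order(raw: dict) -> dict:
    """Pure function: trivial to unit-test, including edge cases."""
    return {"sku": raw["sku"].strip().upper(), "qty": int(raw.get("qty", 1))}

def apply_discount(total: float, qty: int) -> float:
    """Branching logic isolated so each branch can be tested directly."""
    return total * 0.9 if qty >= 10 else total

def process_order(raw: dict, unit_price: float) -> float:
    """Thin orchestrator; each step shows up in IDE auto-completion."""
    order = parse_order(raw)
    return apply_discount(order["qty"] * unit_price, order["qty"])

assert process_order({"sku": " ab1 ", "qty": "12"}, 5.0) == 54.0  # 60 * 0.9
```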


Regulating digital transformation in Saudi Arabia

Due to the rapidly changing situation, it is necessary to regulate all matters related to digitization and digitalization processes. Therefore, the Kingdom launched the Digital Government Authority (DGA) to serve as an umbrella organization for all digital matters in Saudi Arabia. In today’s article, we shall highlight the scope and powers of the authority. First of all, it is important to get acquainted with the definition of digitization, as knowing the basic concepts from the regulator’s point of view will certainly correct our understanding of all relevant terms and determine their actual purpose. We will start with the most popular term, which is digital transformation. It means transforming and developing business models strategically to become digital models based on data, technology, and communication networks. Hence the role of digital government in supporting administrative, organizational, and operational processes within and between government entities to achieve digital transformation, develop, improve and enable easy and effective access to government information and services. 


White boxes in the enterprise: Why it’s not crazy

The cultural problem is simple; your current network vendor will resent your white-box decision and will likely blame every hiccup or fault you experience on the new gear. When that happens, and you go to your white-box vendor to get their side, they may well blame everything on the software, or they may point the finger back at the proprietary vendor. The software supplier will return the favor by pointing at everyone, and all the players may point at your own integration efforts as the source of the problem. If you had finger-pointing in a two-vendor network, white boxes can make that look like a love feast by comparison. The technical problem is one of management. All network devices have to be managed, and the management systems and practices most enterprises use tend to be tuned to their current devices. White-box management is usually set by the software, and you can’t necessarily expect much of a choice in how the management features work. That means your current network-operations people will have to contend with multiple management choices depending on the devices they use.


Augmenting Organizational Agility Through Learnability Quotient

Architects have an important role to play as well in achieving organizational agility. There are articles and books which talk about achieving the stability of the tech organization via design patterns, reference architectures, best practices, leveraging checklists, etc. One good example is Fundamentals of Software Architecture, by Mark Richards and Neal Ford. To instill and improve the dynamic capability of the organization, technical directions alone are not enough. The architect’s vision and strategy for the organization can only be achieved if the organization is capable of executing with the right knowledge. With ever-changing and evolving tech, having social capital across the organization is vital. More so in growing startups where the organization’s dynamics keep changing rapidly. Focusing efforts on the learnability of the organization by building a learning community and having a learning culture across the organization is crucial and architects become the linchpin for that. Matthew Skelton and Manuel Pais summarize perfectly well: Modern architects must be sociotechnical architects; focusing purely on technical architecture will not take us far.


Ways to Cultivate a Cyber-Aware Culture

Everyone's vulnerable to phishing scams, from the receptionist to the CEO. No one is exempt. If you think you're immune because you have a better grasp on security than everyone else, well, that's not how it works. Security must be everyone's job. The best way to secure a business is to start at the top. A cyber-aware culture involves cooperation between departments and ongoing education for all employees, irrespective of how high up they are in the hierarchy. Whether you're filling out a security awareness questionnaire or writing your organization's next policy document, focusing on the following three elements will help you stay true to your cyber-aware culture. ... When we talk of a cyber-aware culture, enterprises need to understand that there's more to it than technology. It's about people, processes, culture, and engagement. Security leaders need to take a holistic approach to cyber risk management. The only way to steer clear of cyber fines and compromised customers is by involving employees. Empower your people so they become part of the solution instead of a liability.


Back-office bank UX: the lessons to learn from the Citi-Revlon tale

Any company undertaking digitisation must have a clear understanding of the key service, or services, it provides to end-customers. This should be their first port of call. It is then best practice to map the processes involved in delivering those services, including all people and systems. By gathering this information, technology teams will encounter employees that inhabit distinct roles across lifecycles, including subject experts, business leaders, and end-customers. The goal is to gain a near-complete understanding of the existing landscape from employees. This can then be manifested in a service blueprint, a journey map, or a process framework. The organisation should anticipate that different outputs or levels of depth may result, depending on the scope and scenario. While it may seem like many organisations, especially within banking, would already have this institutional knowledge, we have learned that this type of knowledge usually exists in small pockets, not across entire organisational lines. Gaining some understanding of the existing process across the organisation will therefore help clarify opportunities to converge siloed processes, increase efficiency, improve communications, and drive to any other pre-defined business objectives.



Quote for the day:

"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis

Daily Tech Digest - July 07, 2021

Mind the gap – how close is your data to the edge?

A lesser talked about technology might be the key to unlocking the potential of technologies like SD-WAN, 5G, edge computing, and IoT. Network Functions Virtualisation (NFV) hosting pushes the limits of modern networks by allowing companies to create an on-demand virtual edge to their networks, closing that gap between an enterprise’s network and the various edge devices where transactions and/or data from mission-critical applications are created. Having NFV as part of your SD-WAN and cloud connectivity strategy enables traffic to make the smallest number of hops on the public internet before it passes through a private, secure software-defined network (SDN). In other words, you shorten that first mile, and optimise the middle- and last-mile by having your data travel safely and quickly over a private connection. Think about every time you tap your card at a PoS terminal on a shopping trip or a night out. To process this transaction, your financial details need to travel to a central point and back again. How long do you want those details traveling across public infrastructure? With a virtual edge, this distance can be significantly minimised, providing predictable, secure connectivity.


Discovering Symbolic Models From Deep Learning With Inductive Biases

The Symbolic Model Framework proposes a general framework that leverages the advantages of both traditional deep learning and symbolic regression. Graph Networks (GNs or GNNs) are a good example, as they have strong and well-motivated inductive biases that are very well suited to complex, explainable problems. Symbolic regression is applied to fit the different internal parts of the learned model that operate on reduced-size representations. A number of symbolic expressions can also be joined together, giving rise to an overall algebraic equation equivalent to the trained Graph Network. The framework can be applied to problems such as rediscovering force laws, rediscovering Hamiltonians, and a real-world astrophysical challenge, demonstrating that drastic improvements can be made to generalization and that plausible analytical expressions can be distilled. Not only can it recover the injected closed-form physical laws for the Newtonian and Hamiltonian examples, but it can also derive a new interpretable closed-form analytical expression that can be useful in astrophysics.
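
For a rough feel of the approach (not the paper's actual pipeline, which uses a dedicated symbolic regression package), the sketch below records input/output pairs from a simulated "learned message function" and searches a tiny set of candidate closed forms for the best least-squares fit:

```python
# Illustrative stand-in for symbolic regression over a learned message function.
import numpy as np

rng = np.random.default_rng(0)
r = rng.uniform(0.5, 5.0, 1000)                 # pairwise distances
msg = 1.0 / r**2 + rng.normal(0, 1e-3, r.size)  # "learned" message ~ force law

candidates = {"a/r": 1 / r, "a/r^2": 1 / r**2, "a*r": r}
for name, basis in candidates.items():
    coeff, *_ = np.linalg.lstsq(basis[:, None], msg, rcond=None)
    mse = np.mean((basis * coeff[0] - msg) ** 2)
    print(f"{name}: coeff={coeff[0]:.3f}, mse={mse:.2e}")  # a/r^2 wins
```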


Is Security An Illusion? How A Zero-Trust Approach Can Make It A Reality

One of the first steps to implementing zero-trust measures is to find an infrastructure and website security partner that specializes in zero trust and can provide consultation and solutions. Using a partner like this can enable an easy implementation across your company’s environment and systems. Look for a partner that can identify segments and microsegments important to your organization and ensure that security measures are implemented company-wide. This includes thinking about the IT environment in segments and microsegments, which include data like PII or customer data as separate areas that need to be accessed independently. For example, if your organization has implemented a third-party application or applications that support HR functions (including payroll, employee information and more), these partners can help ensure that individuals with access to one of the components (or segments) will not be able to access any of the other segments without separate authorization. Multifactor authentication (MFA) is also a core component of zero-trust implementation.
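
The sketch below shows, in miniature, what segment-scoped authorization means in practice: holding a grant for one microsegment never implies access to another. The user names, segment labels, and grant store are hypothetical.

```python
# Hypothetical segment-scoped access check: no transitive access between segments.
SEGMENT_GRANTS = {
    "alice": {"hr:payroll"},
    "bob": {"hr:payroll", "hr:pii"},
}

def require_segment(user: str, segment: str) -> None:
    """Each microsegment is authorized independently."""
    if segment not in SEGMENT_GRANTS.get(user, set()):
        raise PermissionError(f"{user} lacks separate authorization for {segment}")

require_segment("bob", "hr:pii")  # allowed: bob holds an explicit grant
try:
    require_segment("alice", "hr:pii")  # payroll access does not carry over
except PermissionError as err:
    print(err)
```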


Common Linux vulnerabilities admins need to detect and fix

Perhaps, from a security perspective, the most important individual piece of software running on a consumer PC is your web browser. Because it's the tool you use most to connect to the internet, it's also going to be the target of the most attacks. It's therefore important to make sure you've incorporated the latest patches in the release version you have installed and to make sure you're using the most recent release version. Besides not using unpatched browsers, you should also consider the possibility that, by design, your browser itself might be spying on you. Remember, your browser knows everywhere on the internet you've been and everything you've done. And, through the use of objects like cookies (data packets web hosts store on your computer so they'll be able to restore a previous session state), complete records of your activities can be maintained. Bear in mind that you "pay" for the right to use most browsers -- along with many "free" mobile apps -- by being shown ads. Consider also that ad publishers like Google make most of their money by targeting you with ads for the kinds of products you're likely to buy and, in some cases, by selling your private data to third parties.


Agile Methodology Finally Infiltrates the C-Suite

What started as a snowball slowly gathered mass and speed, launching a genuine agile revolution. Today, that revolution has finally reached the executive suite. Agility is officially a C-level concern. In fact, a recent survey of senior business leaders, conducted by IDC and commissioned by ServiceNow, found that 90% of European CEOs consider agility critical to their company's success. That’s no surprise after a year in which the ability to react quickly and effectively to new business challenges took center stage. CEOs are increasingly aware of the success agile companies enjoy. In my experience, these organizations are well-positioned to create great customer and employee experiences, drive productivity, and attract and retain the best talent. However, rather than talking about agility as a standalone C-suite priority, I see it as a foundational enabler of these three key organizational priorities. ... When measured against five types of organizational agility, IDC found that one third of organizations sit in the lower “static” or “disconnected” tiers, while nearly half are categorized as the “in motion” middle tier of the agility journey. Just one in five (21%) have attained the top levels of “synchronized” and “agile.”


Preparing for your migration from on-premises SIEM to Azure Sentinel

Many organizations today are making do with siloed, patchwork security solutions even as cyber threats are becoming more sophisticated and relentless. As the industry’s first cloud-native SIEM and SOAR (security operation and automated response) solution on a major public cloud, Azure Sentinel uses machine learning to dramatically reduce false positives, freeing up your security operations (SecOps) team to focus on real threats. Moving to the cloud allows for greater flexibility—data ingestion can scale up or down as needed, without requiring time-consuming and expensive infrastructure changes. Because Azure Sentinel is a cloud-native SIEM, you pay for only the resources you need. In fact, The Forrester Total Economic Impact™ (TEI) of Microsoft Azure Sentinel found that Azure Sentinel is 48 percent less expensive than traditional on-premises SIEMs. And Azure Sentinel’s AI and automation capabilities provide time-saving benefits for SecOps teams, combining low-fidelity alerts into potential high-fidelity security incidents to reduce noise and alert fatigue.


Exclusive Q&A: Neuralink’s Quest to Beat the Speed of Type

Most machine learning is open-loop. Say you have an image and you analyze it with a model and then produce some results, like detecting the faces in a photograph. You have some inference you want to do, but how quickly you do it doesn't generally matter. But here the user is in the loop—the user is thinking about moving and the decoder is, in real time, decoding those movement intentions, and then taking some action. It has to act very quickly because if it’s too slow, it doesn't matter. If you throw a ball to me and it takes my BMI five seconds to infer that I want to move my arm forward—that’s too late. I’ll miss the ball. So the user changes what they’re doing based on visual feedback about what the decoder does: That’s what I mean by closed loop. The user makes a motor intent; it’s decoded by the Neuralink device; the intended motion is enacted in the world by physically doing something with a cursor or a robot arm; the user sees the result of that action; and that feedback influences what motor intent they produce next. I think the closest analogy outside of BMI is the use of a virtual reality headset—if there’s a big lag between what you do and what you see on your headset, it’s very disorienting, because it breaks that closed-loop system.
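
A toy simulation makes the latency point concrete: the "user" below corrects toward a target based on what it sees, and a stale decode destabilizes the loop. This is purely illustrative and has nothing to do with Neuralink's actual decoder.

```python
# Toy closed-loop tracking: larger decode latency -> overshoot and instability.
import collections

def final_error(latency_steps: int, gain: float = 0.8, steps: int = 60) -> float:
    target, cursor = 1.0, 0.0
    pending = collections.deque([0.0] * latency_steps)  # intents still "in flight"
    for _ in range(steps):
        intent = gain * (target - cursor)  # user corrects from visual feedback
        pending.append(intent)
        cursor += pending.popleft()        # decoded action lands late
    return abs(target - cursor)

for lag in (0, 1, 3, 5):
    print(f"decoder lag {lag} steps -> final tracking error {final_error(lag):.3f}")
```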


Inside Google’s Quantum AI Campus

Google Quantum AI Lab connects the different pieces of quantum computing together using open source software and Cloud APIs. Its products include open-source libraries such as Cirq, OpenFermion, and TensorFlow Quantum. Cirq is Google’s way of defining and modifying quantum circuits. It allows programmers to design and create quantum circuits, analyse them using simulators, and then send them to hardware using Google’s quantum computing service API. OpenFermion, on the other hand, is a library for quantum simulations, especially quantum chemistry and electronic structure calculations. The error-corrected quantum computer will be the size of a tennis court, said Erik Lucero, Lead Engineer, Quantum Operations and Site Lead at Google Santa Barbara. Within the quantum computer, one million qubits will operate in concert, directed by a surface code error correction. Marissa Giustina, Quantum Electronics Engineer and Research Scientist at Google, is part of the team building the cryogenic hardware that facilitates information transfer. She said the systems inside the campus look like racks of electronics operating at room temperature, connected to a big cylinder: the dilution refrigerator.
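
For a flavor of the Cirq workflow described above, a minimal example using the library's public API builds a two-qubit Bell circuit and samples it with the local simulator:

```python
import cirq

q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.H(q0),                     # put q0 in superposition
    cirq.CNOT(q0, q1),              # entangle the pair
    cirq.measure(q0, q1, key="m"),  # readout
)
result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key="m"))    # ~50/50 split between |00> and |11>
```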

Scrum uses an iterative approach to Product Development, which means that the team delivers usable work frequently. This small change is significant because by delivering usable work, the Scrum team is learning by doing and is not building up technical debt, or unfinished work, which will need to be completed later. By delivering a done, usable increment each Sprint, the Scrum team eliminates waste and can gather feedback on what was delivered, enabling faster learning. Henrik Kniberg's well-known graphic is my favorite example of this concept. ... In the end, the team met its goal of delivering the required business functionality by the deadline. I’m confident this would have been impossible without the Scrum framework. By making decisions based on what they knew at the time and adapting to what they learned through experience, the team also became more efficient. By focusing only on the essentials, the team was able to eliminate waste. And, by delivering usable work frequently, the team was able to gain more transparency about the work that was ahead of them, eliminating the accumulation of technical debt.


Autonomous Security Is Essential if the Edge Is to Scale Properly

The emerging problem in front of us is how to deliver secure dynamic networking at this extreme scale while meeting the economic and security requirements of the various tiers of service. For instance, SD-WAN for large enterprises has a price point that allows for manual life-cycle operations, but secure networking for small and midsize business and the IIoT do not. Security policy operations are manually arbitrated in the enterprise market today, but such manual operations will never work at the future scales we're talking about. Finally, enterprise networks are relatively static, but the fully connected, smart-everything world ahead of us will feature highly dynamic, zero-trust networks at extreme scale. The upshot: For secure networking to function at the scale and price we need, it must become autonomous. When you unpack the nature of cloud-delivered secure endpoint services, you immediately discover the common limiting factor for cost, security, scale, and reliability: people. Right now, secure network operations are manually arbitrated. For example, deployment of a single secure SD-WAN endpoint takes five to nine worker-hours.



Quote for the day:

"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman

Daily Tech Digest - July 06, 2021

The future of deep learning, according to its pioneers

“Humans and animals seem to be able to learn massive amounts of background knowledge about the world, largely by observation, in a task-independent manner,” Bengio, Hinton, and LeCun write in their paper. “This knowledge underpins common sense and allows humans to learn complex tasks, such as driving, with just a few hours of practice.” Elsewhere in the paper, the scientists note, “[H]umans can generalize in a way that is different and more powerful than ordinary iid generalization: we can correctly interpret novel combinations of existing concepts, even if those combinations are extremely unlikely under our training distribution, so long as they respect high-level syntactic and semantic patterns we have already learned.” Scientists provide various solutions to close the gap between AI and human intelligence. One approach that has been widely discussed in the past few years is hybrid artificial intelligence that combines neural networks with classical symbolic systems. Symbol manipulation is a very important part of humans’ ability to reason about the world. It is also one of the great challenges of deep learning systems. Bengio, Hinton, and LeCun do not believe in mixing neural networks and symbolic AI. 


Machine Learning for Performance Management

Like whether they are likely to finish on time, or be asked to do overtime. However, again as humans, we can only process a handful of variables at any one time and we base our predictions on our past experiences. As none of us can work 24/7, the predictions of one person will likely be different from those of another. When you consider other factors such as people, differing operating procedures, machine health, raw material variability, storage and movement conditions and environmental changes such as weather, the number of variables grows and human predictability begins to drop off. This is where the reliance on gut decisions begins to increase. Gut decisions are those where we cannot easily explain the rationale. Gut decisions are still based on experience and, in fact, may be the result of combining a lot of inputs and experiences subconsciously and creating a best guess. They are not the same as a lucky guess. Therefore, you will likely find, in a really experienced operator, that these gut decisions are actually pretty good. Unfortunately, the experienced workers are becoming scarce and the ones we do have are far too useful to be staring at trends all day!
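
A minimal sketch of the idea: a model that ingests many process variables at once to produce the estimate an experienced operator would otherwise make by gut feel. The feature names and data are entirely hypothetical.

```python
# Hypothetical completion-time predictor over many process variables.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# columns: machine_health, material_variability, ambient_temp, crew_experience
X = rng.normal(size=(500, 4))
y = 60 + 5 * X[:, 0] - 3 * X[:, 1] + rng.normal(0, 1, 500)  # minutes to finish

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
shift = [[0.2, 1.5, -0.3, 0.8]]  # the current shift's readings (assumed)
print(f"Predicted completion time: {model.predict(shift)[0]:.1f} min")
```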


How Business Leaders Can Foster an Innovative Work Culture

To cultivate a culture of innovation, you must encourage action on creative ideas. Let your employees feel valued, like they have some autonomy in the idea creation process. They should be able to feel safe to share bold or crazy ideas that come to their mind. Trust your team to find new ways to solve problems. If you’ve never failed, you’ve never taken chances. Taking risks is a big part of innovation. You have to remind your employees that failure is inevitable and every idea has a degree of uncertainty, and you can do this by creating a safe environment where you encourage your team to test their innovative ideas and even make mistakes that do not cost the company a huge fortune. The important thing is to learn from your mistakes to ensure that you don’t fail the same way twice. If you hold back on ideas because of the fear of failing, you’ll stay confined to the monotony of the status quo and your business will never make any significant leaps. The important thing to remember is to recover and try again. You can hold pitching contests for your employees and develop new ideas that they will be asked to present in front of management. 


An Introduction to Machine Learning Engineering for Production/MLOps — Phases in MLOps

It is common knowledge that data rules the AI world, pretty much. Our models, at least in the case of supervised learning, are only as good as our data. It is important, especially when working in a team, to be on the same page with regard to the data you have. Consider the same handwriting recognition task that we defined earlier. Suppose you and your team decide to discard poorly clicked images for the time being. Now, what is a poorly clicked image? It might be different for your teammate and it might be different for you. In such cases, it is important to establish a set of rules to define what a poorly clicked image is. Maybe if you struggle to read more than 5 words on the page, you decide to discard it. Something of that sort. This is an extremely important step even in research, as having ambiguity in data and labels will only lead to more confusion for the model. Another important thing to be taken into consideration is the type of data you are dealing with, i.e., structured or unstructured. How you work with the data you have largely depends on this aspect. Unstructured data includes images, audio signals, etc., and you can carry out data augmentation in these cases to increase the size of your dataset.
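
For the unstructured case, here is a minimal sketch of data augmentation using plain NumPy; the random 64x64 array stands in for a scanned handwriting image.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image: np.ndarray) -> list:
    """Derive extra training samples from one labeled image."""
    return [
        image,
        np.clip(image + rng.normal(0, 8, image.shape), 0, 255),  # add noise
        np.roll(image, 2, axis=1),                               # shift right
        np.clip(image * 1.2, 0, 255),                            # brighten
    ]

page = rng.integers(0, 256, size=(64, 64))  # stand-in handwriting scan
print(f"1 labeled sample -> {len(augment(page))} training samples")
```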


Data Scientists and ML Engineers Are Luxury Employees

Apart from the interest in the field, another main reason is a bit more practical. I have spent so much time and energy learning the necessary topics (think probability, statistics, calculus, linear algebra, distributed computing, machine learning, deep learning…) that I want this knowledge to stick in. And we are all humans. Even if you are a genius, if you don’t practice what you learn, the knowledge goes away. So when your boss asks you (for the tenth time in a row) to create a piece of software or an analysis that has nothing to do with machine learning, what is that you think? Are you happy? Another important factor is that the field is moving at lightning speed. It was already the case when I was in software engineering, but now it is not even comparable. Not a day goes by without hearing from the latest breakthrough, the newest shiny deep learning architecture, this great new book that every ML practitioner should read, etc. When you are not practicing ML in your day job, you are left with practicing it during your free time. It is OK for a little while, but it is not sustainable in the long run. We are all humans. We need time off to relax and be with our loved ones. Don’t get me wrong. I love learning new things. 


Neo’s Governance Model Projected to Transform Blockchain Space

From an architectural perspective, Neo N3 has also been optimized to deliver a more streamlined user experience, including switching from a UTXO to a pure account model, reconfiguring the virtual machine, adding a state root service, upgrading block synchronization mechanisms, and introducing new data compression mechanisms. Since the release of the Neo N3 TestNet, performance is already up by approximately 50 times, and the MainNet is set to launch in the near future. ... Under POW consensus governance models, arithmetic (hashing) power confers the rights, and all newly generated revenue is owned by nodes that maintain a monopoly over that power. Meanwhile, POS consensus models primarily distribute tokens to those who hold the most money — thus, distribution of benefits under both systems is far from equitable. In addition, POW and POS models require users to pay high processing fees for transferring transactions and using on-chain applications. As a result, platforms such as Ether and EOS have been plagued by high fees, resulting in transaction congestion along with GAS fees worth hundreds of dollars on Ether.


Microsoft Power Platform and low code/no code development: Getting the most out of Fusion Teams

One aspect of the Fusion Teams approach is a set of new tools for professional developers and IT pros, including integration with both Visual Studio and Visual Studio Code. At the heart of this side of the Teams development model is the new PowerFX language, which builds on Excel's formula language and blends in a SQL-like query language. PowerFX lets you export both Power Apps designs and formulas as code, ready for use in existing code repositories, so IT teams can manage Power Platform user interfaces alongside their line-of-business applications. Microsoft has delivered a new Power Platform command line tool, which can be used from the Windows Terminal or from the terminals in its development platforms. The Power Platform CLI can be used to package apps ready for use, as well as to extract code for testing. One advantage of this approach is that a user building their own app in Power Apps can pass it over to a database developer to help with query design. Code can be edited in, say, Visual Studio Code, before being handed back with a ready-to-use query. Fusion teams aren't about forcing everyone into using a lowest common denominator set of tools; they're about building and sharing code in the tools you use the most.


The encouraging acceleration of cloud adoption in financial services

When regulations are constantly evolving, in multiple jurisdictions, a cloud-based approach to CLM is much more agile and adaptable to emerging challenges. Using a system that can be updated to always be compliant, provides risk management teams and ultimately the C-suite and board with the confidence that they are future proofed against evolving regulation and will avoid hefty financial penalties from regulators. ... Transformation plans rarely, if ever, begin and end in any one CIO’s tenure – they are a continual process to move things forward for the organisation – but the efforts of individual leaders need to pave the way for the next without tying their hands and forcing them down a path that may present issues later down the line. ... Whether banks are just looking to digitise existing processes or to use AI and ML to make more intelligent decisions and look for fraudulent behavioural patterns, the fact that more conversations are being had in the financial service world about cloud, or that these conversations are going somewhere, gives me confidence that we’re moving in the right direction and there are good days to come.


The chip shortage is real, but driven by more than COVID

The problem is that demand is so great that existing production capacity can’t keep up. Before there was COVID, digital transformation was driving sales. “There was a pretty large movement in the enterprise towards more digitalization across different sectors of the markets in different verticals,” said Morales. “I think the pandemic only accelerated that,” he said. “All of the connected everythings--smart cities, smart roadway, smart campuses, smart airports, smart, autonomous everything--I think this [shortage] was going to happen anyway, it just happened faster,” said Fenn. Another problem facing chip makers is that demand for processors was across-the-board, much of it for older technology that isn’t the first choice for what the vendors would like to sell. Intel, TSMC, GlobalFoundries, Samsung, and other advanced chip makers are pushing into 7nm and 5nm designs that smart refrigerators and cars don’t need. They do fine with 40nm or 28nm designs, and no one is investing in more fabs to make more. So the existing older fabs will continue to run at full capacity for the foreseeable future, with no room for error and no plans to build more.


Easy Guide to Remote Pair Programming

Solitary programmers who work well alone and are efficient shouldn’t be forced to pair program. There are so many reasons why one would prefer to work alone, and not in a pair. Think about people who are very introverted, deep experts in a difficult domain, or people who aren’t used to collaborating with others. No practice should be forced on anyone, but rather explained and slowly introduced; we need to know and accept that some people won’t like it, and won’t use it. Another situation when (remote) pair programming doesn’t work is when there is a strong push against collaboration in the whole organization. Management can instill the value that we need to work individually; everyone needs to be evaluated for their own individual work, as otherwise evaluation will be very difficult. There can be many situations where accounting, evaluation, and task-keeping need to follow the particular rules of the organization. Pair programming won’t work in this environment. There are also organizations with strong silos, where you might be able to work in a pair within your own narrow specialization, but never with other specializations.



Quote for the day:

"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell

Daily Tech Digest - July 05, 2021

Era of Quantum Computers

Quantum computers will disrupt almost every industry and could contribute greatly in the fields of finance, military affairs, intelligence, environment, deep-space exploration, drug design and discovery, aerospace engineering, utilities like nuclear fusion, polymer design, artificial intelligence, big data search, and digital manufacturing. Quantum computers will not only solve all of life’s most complex problems and mysteries, but will soon empower all A.I. systems, acting as the brains of these super-human machines. Teachers can use quantum computing as an object lesson to introduce high-level concepts; e.g., the physics behind quantum machines offers an avenue of exploration. Quantum computers will personalize higher education. The power and speed of quantum computing may best serve the individualized needs of the students in visualizing adaptive learning models. It constrains the space to make it more understandable and provides theoretical concepts a practical application. In the broader picture, quantum computing will raise the bar in digital literacy. For students, quantum technologies are their future and they must have an early understanding of the fundamentals.


Role of Continuous Monitoring in DevOps Pipeline

It is an automated process that helps DevOps teams in the early detection of compliance issues that occur at different stages of the DevOps process. As the number of applications deployed on the cloud grows, the IT security team must adopt various security software solutions to mitigate the security threats while maintaining privacy and security. Continuous Monitoring in DevOps is also called Continuous Control Monitoring (CCM). It is not restricted to just DevOps but also covers any area that requires attention. It provides necessary data sufficient to make decisions by enabling easy tracking and rapid error detection. It provides feedback on things going wrong, allowing teams to analyze and take timely actions to rectify problematic areas. It is easily achievable using good Continuous Monitoring tools that are flexible across different environments – whether on-premise, in the cloud, or across containerized ecosystems – to watch over every system all the time. At the time of the production release of the software product, Continuous Monitoring notifies the Quality analysts about any concerns arising in the production environment.
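
A minimal sketch of the probing side of this: poll an endpoint on a schedule and surface failures the moment they happen. The URL, interval, and latency budget are placeholders.

```python
import time
import requests  # assumes the requests package is installed

def watch(url: str, interval_s: int = 30, max_latency_s: float = 2.0) -> None:
    """Continuously probe a service and print alerts on failure."""
    while True:
        try:
            resp = requests.get(url, timeout=max_latency_s)
            if resp.status_code != 200:
                print(f"ALERT: {url} returned {resp.status_code}")
        except requests.RequestException as exc:
            print(f"ALERT: {url} unreachable: {exc}")
        time.sleep(interval_s)

# watch("https://example.com/health")  # runs until interrupted
```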


Why data is the real differentiator in D2C retail

Data fabrics offer organisations, both within and outside the retail sector, centralised access and a single, unified view of data across their entire enterprise. This can be taken one step further with the use of ‘smart’ data fabrics, which embed a wide range of analytics capabilities, making it faster and easier for brands and retailers to gain new insights and power intelligent predictive and prescriptive services and applications. For retail organisations reluctant to replace siloed systems due to the expectation that the cost would be prohibitive, smart data fabrics mark a way for them to continue to leverage their existing investments by allowing existing legacy applications and data to remain in place. This means enterprises can bridge legacy and modern infrastructure without having to “rip-and-replace” any of their existing technology. When it comes to adopting a D2C model, this approach will allow brands and retailers to harness data from across their different channels to better understand their customers. This will empower them to provide the right types of experiences and interactions and to gain a more informed understanding of the types of products their customers desire, for example.


How Outsourcing Practices Are Changing in 2021: an Industry Insight

The tech ecosystem had already embraced the Fourth Industrial Revolution in terms of advancing technologies. But the outsourcing community was still a step behind. It still relied on humans for the majority of work. As the pandemic ushered in the future of work, outsourcing changed. A new digital outsourcing model emerged to help outsourcing approaches be on par with the Fourth Industrial Revolution. As the majority of businesses have embraced the technology revolution, outsourcers are also gearing up for the same. These technologies in outsourcing will enable both parties to become more flexible, resilient, efficient, and productive while driving stable revenue. More organizations are strategically incorporating these evolving technologies into their policies in the coming times. ... Businesses are now looking forward to more sustainable practices in outsourcing to continue having a long-term relationship. The pandemic forced businesses to revoke their outsourcing contracts with companies mostly because they couldn’t trust their project during uncertain times.


How AI is helping enterprises turn the tables on malicious attacks

The major benefit of AI security tools is how they can address the needle in the haystack problem, Kler says. Humans cannot handle the proliferation of data points and the massive amounts of data pouring into the system, but AI is very good at identifying, filtering, and prioritizing threat warnings. “It replaces the two overwhelmed SIEM guys trying to filter the millions of alerts in your SOC center,” Kler says. “AI can prioritize and correlate alerts, then direct your attention to the next urgent task.” In the future, AI will also help us in threat hunting in the network, uncovering fine correlations and statistical anomalies to highlight them for security teams. AI can also be used for overall threat intelligence, predicting when, where, and what kind of attacks your organization might be facing next — predictive maintenance, in other words, to determine what’s going to go wrong next. For instance, if attacks on medical facilities ramp up, it can warn you that your own medical facility is now at increased risk. But remember that AI is not a silver bullet that’s going to solve every security issue, Kler says.
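
A minimal sketch of the "needle in the haystack" triage described here: rank incoming alerts by anomaly score so the most unusual events surface first. The feature encoding is deliberately simplistic and hypothetical.

```python
# Rank alerts by anomaly score so analysts see the oddest events first.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
# columns: bytes_out, failed_logins, off_hours_flag
alerts = np.vstack([
    rng.normal([500, 1, 0], [100, 1, 0.1], size=(995, 3)),  # routine noise
    [[50_000, 40, 1]] * 5,                                  # a few oddities
])
scores = IsolationForest(random_state=0).fit(alerts).score_samples(alerts)
print("Investigate first:", np.argsort(scores)[:5])  # lowest score = oddest
```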


McKinsey: These are the skills you will need for the future of work

Our research suggests governments could consider reviewing and updating curricula to focus more strongly on the DELTAs. Given the weak correlation between proficiency in self-leadership and interpersonal DELTAs and higher levels of education, a strong curricular focus on these soft skills may be appropriate. Governments could also consider leading further research. Many governments and academics have started to define the taxonomies of the skills citizens will require, but few have done so at the level described here. Moreover, few, if any, have undertaken the considerable amount of research required to identify how best to develop and assess such skills. For instance, for each DELTA within the curriculum, research would be required to define progression and proficiency levels achievable at different ages and to design and test developmental strategies and assessment models. The solutions for different DELTAs are likely to differ widely. For example, the solutions to develop and assess “self-awareness and self-management” would differ from those required for “work-plan development” or “data analysis.”


Beginner’s Guide To Lucid: A Network For Visualizing Neural Networks

Lucid is a library that provides a collection of infrastructure and tools for researching neural networks and understanding how they make interpretations and decisions based on their input. It is a step up from DeepDream and provides flexible abstractions so that it can be used for a wide range of interpretability research. Lucid helps us understand the how and why of a given prediction, letting the end user see the reasons a particular output occurred. There is growing interest in making neural networks interpretable to humans, both for research purposes and for better understanding, and the field of neural network interpretability has formed to help with these concerns. Lucid makes use of convolutional neural networks, which have many convolutional layers. The early layers look for basic lines and simple shapes and patterns in the input image. The results from these layers keep propagating forward, responding to progressively more complex features; this information then flows on to generate the output from the final layers.
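
Going by Lucid's published tutorials, a minimal usage sketch looks like the following; the layer/channel name comes from the library's own examples, and Lucid targets TensorFlow 1.x.

```python
import lucid.modelzoo.vision_models as models
from lucid.optvis import render

model = models.InceptionV1()
model.load_graphdef()  # fetch the pretrained graph definition
# Optimize an input image to excite one channel of one layer:
render.render_vis(model, "mixed4a_pre_relu:476")
```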


4 ways the coder community can help fix its diversity problem

Open source, by design, welcomes diversity because anyone can contribute to software code from anywhere in the world. Teams are often geographically distributed, which leads to more diversity, and that correlates with positive results to team output, research shows. We witnessed open source’s diversity-powered resilience in action last year. As the pandemic bore down, GitHub, the largest open source developer platform with more than 50 million developers, found the developer activity remained consistent—or even increased. If the pandemic reduced developer activity in one region more than another, at one time or another, the geographic diversity of the community may have mitigated the impact. To some extent, that happens every year as different regions go more quiet than others for holidays, such as Christmas in the Western world and Lunar New Year in China. In the past three decades, open source has moved from the fringe of software development to the core, and it has transformed how software is built and made.


What really is consumable analytics?

Put simply, consumable analytics visualises data. It brings together vast amounts of information and presents it in a straightforward and easy-to-understand format, so that as the user navigates the business system, they are exposed to the patterns and trends they need without having to manually search for that data. Every record becomes a dashboard that can be easily interpreted by the user, alerting teams to key data insights in real time and allowing them to take appropriate action quickly. Let’s take a change in total monthly revenue. This could indicate a variety of issues, such as inaccurate forecasting or a poor sales period, much in the same way that a sharp increase in customer help desk requests could indicate a faulty product line or technical problems online. Spotting this kind of information manually would take considerable time and manpower, and problems can easily be caught too late if there is not a specialist team consistently monitoring these reports. Consumable analytics flags these changes as they happen, saving the time and resources needed to identify the problem and allowing teams to focus on a solution.
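
A minimal sketch of flagging a revenue shift automatically rather than waiting for a manual report; the figures and the 10% threshold are purely illustrative.

```python
import pandas as pd

revenue = pd.Series(
    [120_000, 118_500, 121_300, 98_200],  # illustrative monthly totals
    index=pd.period_range("2021-03", periods=4, freq="M"),
)
change = revenue.pct_change().iloc[-1]
if abs(change) > 0.10:  # alert threshold (assumed)
    print(f"Flag: monthly revenue moved {change:.1%} -- investigate")
```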


CISA Emphasizes Urgency of Avoiding 'Bad' Security Practices

The continued use of outdated and unsupported hardware is a long-standing cybersecurity problem, says Erich Kron, a former security manager for the U.S. Army’s 2nd Regional Cyber Center. "End-of-life and old software often lacks the ability to be patched, leaving known vulnerabilities for attackers to exploit," he says. "Hard-coded passwords, or the inability to handle complex or secure passwords, is a significant risk in both the private and public sectors." Kron, a security awareness advocate for the security firm KnowBe4, adds that the bad practices catalog from CISA "makes for good overall guidance for improvements in cyber hygiene. There is power in the government setting the example for the private sector by bringing light to these bad practices." Frank Downs, a former U.S. National Security Agency offensive analyst, offers a similar perspective. "This collection of practices can act as a single point of truth for the field … a universal touchstone that can provide a baseline for all organizations. 



Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad

Daily Tech Digest - July 04, 2021

The importance of Robotic Process Automation

Michael believes that RPA will grow in importance in the future for a number of reasons. Firstly, understanding. It’s no longer an unknown technology. So many large organizations have Digital Workforces and so the worry and uncertainty around them have gone. Secondly, there is a real drive to add ‘Intelligent’ ahead of ‘Automation’. Whilst we aren’t quite at the widespread adoption of ‘Intelligent Automation’ just yet, these cognitive elements are getting better and more available each week. Once we have more use cases, we will see the early adopters of RPA start to take the next step and begin to ‘add the human back into the robot’. Thirdly, the net cost of RPA is decreasing. There are now community versions available free of charge, additional software given as part of the platforms, and training available for free. The barriers to entry are disappearing. Furthermore, Mahesh highlights that the global pandemic and the economic crisis have put a lot of organizations in a state of flux, made them change business processes, and also highlighted the need for more automation through RPA.


How AI Is Changing The Real Estate Landscape

AI has applications in estimating the market value of properties and predicting their future price trajectory. For example, ML algorithms combine current market data and public information such as mobility metrics, crime rates, schools, and buying trends to arrive at the best pricing strategy. The AI uses a regression algorithm– accounting for property features such as size, number of rooms, property age, home quality characteristics, and macroeconomic demographics–to calculate the best price range. To wit, the AI algorithms can predict the prices based on the geographic location or future development. Online real estate marketplace Zillow puts out home valuations for 104 million homes across the US. The company, founded by former Microsoft executives, uses cutting edge statistical and machine learning models to vet hundreds of data points for individual homes. Zillow employs a neural network-based model to extract insights from huge swathes of data and tax assessor records and direct feeds from hundreds of multiple listing services and brokerages.
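
A minimal sketch of the regression idea described, with hypothetical features and synthetic data (this is not Zillow's actual model):

```python
# Toy hedonic price regression over property and neighborhood features.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
# columns: size_sqft, rooms, age_years, school_rating, crime_index
X = rng.normal([1800, 3, 25, 7, 3], [400, 1, 10, 1.5, 1], size=(1000, 5))
price = 50_000 + 150 * X[:, 0] + 20_000 * X[:, 3] - 4_000 * X[:, 4]
price += rng.normal(0, 15_000, 1000)  # unexplained variation

model = LinearRegression().fit(X, price)
home = [[2100, 4, 12, 8.5, 2.1]]  # a listing to value (assumed)
print(f"Estimated value: ${model.predict(home)[0]:,.0f}")
```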


Quantum Computing just got desktop sized

Quantum computing is coming on leaps and bounds. Now there’s an operating system available on a chip, thanks to a Cambridge University-led consortium with a vision to make quantum computers as transparent and well known as the Raspberry Pi. This “sensational breakthrough” is likened by the Cambridge Independent Press to the moment during the 1960s when computers shrunk from being room-sized to sitting on top of a desk. Around 50 quantum computers have been built to date, and they all use different software – there is no quantum equivalent of Windows, iOS or Linux. The new project will deliver an OS that allows the same quantum software to run on different types of quantum computing hardware. The system, Deltaflow.OS (full name Deltaflow-on-ARTIQ), has been designed by Cambridge University startup Riverlane. It runs on a chip developed by consortium member SEEQC using a fraction of the space necessary in previous hardware. SEEQC is headquartered in the US with a major R&D site in the UK. “In its most simple terms, we have put something that once filled a room onto a chip the size of a coin, and it works,” said Dr. Matthew Hutchings.


This Week in Programming: GitHub Copilot, Copyright Infringement and Open Source Licensing

On the idea of copyright infringement, Guadamuz first points to a research paper by Albert Ziegler published by GitHub, which looks at situations where Copilot reproduces exact texts, and finds those instances to be exceedingly rare. In the original paper, Ziegler notes that “when a suggestion contains snippets copied from the training set, the UI should simply tell you where it’s quoted from,” as a solution against infringement claims. On the idea of the GPL license and “derivative” works, Guadamuz again disagrees, arguing that the issue at hand comes down to how the GPL defines modified works, and that “derivation, modification, or adaptation (depending on your jurisdiction) has a specific meaning within the law and the license.” “You only need to comply with the license if you modify the work, and this is done only if your code is based on the original to the extent that it would require a copyright permission, otherwise it would not require a license,” writes Guadamuz. “As I have explained, I find it extremely unlikely that similar code copied in this manner would meet the threshold of copyright infringement, there is not enough code copied...”


Django Vs Express: The Key Differences To Observe in 2021

Django is a Python framework that provides rapid development. It has a pragmatic and clean design. It is recognized for its ‘batteries included’ philosophy, hence it is ready to be utilized. Here are some of the vital features of Django: it takes care of content management, user authentication, site maps, and RSS feeds effectively; it is extremely fast: the framework was designed to help programmers take web applications from initial conception to project completion as rapidly as possible. ...  Express.js is a flexible and minimal Node.js web app framework that supplies a robust set of features for mobile and web-based apps. With innumerable HTTP utility methods and middleware at their disposal, developers can build a dynamic API easily and quickly. Numerous popular web frameworks are built on this framework. Below are some of the noteworthy features of Express.js: middleware is a fragment of the platform that has access to the client request, the database, and other middlewares, and it is primarily responsible for the orderly organization of the framework’s different functions; Express.js supplies several commonly used features of Node.js in the form of functions that can be freely employed anywhere in the package.
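
To make Django's "batteries included" style concrete, here is a minimal sketch of a view plus a URL route; the names and file layout are illustrative.

```python
# Illustrative contents of a Django app: one JSON view and its route.
from django.http import JsonResponse
from django.urls import path

def healthcheck(request):
    # Django handles request parsing, routing, and the response cycle.
    return JsonResponse({"status": "ok"})

urlpatterns = [
    path("health/", healthcheck),  # e.g. wired up in the project's urls.py
]
```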


Unleashing the Power of MLOps and DataOps in Data Science

Data is overwhelming, and so is the science of mining, analyzing, and delivering it for real-time consumption. No matter how much data is good for business, it is still vulnerable to putting the privacy of millions of users at unimaginable risk. That is exactly why there is a sudden inclination towards more automated processes. In the past year, enterprises sticking to conventional analytics have realized that they will not survive any longer without a makeover. For example, enterprises are experimenting with micro-databases, each storing master data for a particular business entity only. There is also an increase in the adoption of self-servicing practices to discover, cleanse, and prepare data. They have understood the importance of embracing the ‘XOps’ mindset and delegate more important roles to MLOps and DataOps practices. Now, MLOps are important because bringing ML models to execution is more difficult than training them or deploying them as APIs. The complication further worsens in the absence of governance tools. 


TrickBot Spruces Up Its Banking Trojan Module

TrickBot is a sophisticated (and common) modular threat known for stealing credentials and delivering a range of follow-on ransomware and other malware. But it started out as a pure-play banking trojan, harvesting online banking credentials by redirecting unsuspecting users to malicious copycat websites. According to researchers at Kryptos Logic Threat Intelligence, this functionality is carried out by TrickBot’s webinject module. When a victim attempts to visit a target URL (like a banking site), the TrickBot webinject package performs either a static or dynamic web injection to achieve its goal, as researchers explained: “The static inject type causes the victim to be redirected to an attacker-controlled replica of the intended destination site, where credentials can then be harvested,” they said, in a Thursday posting. “The dynamic inject type transparently forwards the server response to the TrickBot command-and-control server (C2), where the source is then modified to contain malicious components before being returned to the victim as though it came from the legitimate site.”


How a college student founded a free and open source operating system

FreeDOS was a very popular project throughout the 1990s and into the early 2000s; the community isn’t as big these days, but it’s great that we are still an engaged and active group. If you look at the news items on our website, you’ll see we post updates on a fairly regular basis. It’s hard to estimate the size of the community. I’d say we have a few dozen members who are very active, and a few dozen others who reappear occasionally to post new versions of their programs. I think maintaining an active community that’s still working on an open source DOS from 1994 is a great sign. Some members have been with us from the very beginning, and I’m really thankful to count them as friends. We do video hangouts on a semi-regular basis, and it’s great to finally “meet” the folks I’ve only exchanged emails with over the years. It’s meetings like these that remind me open source is more than just writing code; it’s about a community. And while I’ve always done well with our virtual community that communicates via email, I really appreciated getting to talk to people without the asynchronous delay or artificial filter of email; making that real-time connection means a lot to me.


Let Google Cloud’s predictive services autoscale your infrastructure

Predictive autoscaling uses your instance group’s CPU history to forecast future load and calculate how many VMs are needed to meet your target CPU utilization. Our machine learning adjusts the forecast based on recurring load patterns for each MIG. You can specify how far in advance you want the autoscaler to create new VMs by configuring the application initialization period. For example, if your app takes 5 minutes to initialize, the autoscaler will create new instances 5 minutes ahead of the anticipated load increase. This lets you keep CPU utilization within the target and keep your application responsive even when demand grows quickly. Many of our customers have different capacity needs at different times of the day or on different days of the week. Our forecasting model understands weekly and daily patterns to account for these differences. For example, if your app usually needs less capacity on the weekend, our forecast will capture that. Or, if you have higher capacity needs during working hours, we also have you covered.
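
The arithmetic behind this is straightforward. Below is a conceptual sketch (not the Google Cloud API) of how a predictive scaler might size a group: forecast the load one initialization period ahead, then divide by the target utilization. The numbers are illustrative assumptions.

    import math

    TARGET_UTILIZATION = 0.6   # e.g. keep average CPU at 60%
    INIT_PERIOD_MINUTES = 5    # the app takes 5 minutes to initialize

    def vms_needed(forecast_cores):
        # Each VM contributes one core of capacity in this toy model.
        return math.ceil(forecast_cores / TARGET_UTILIZATION)

    def plan_scale_up(forecast, now_minute):
        # Look one initialization period ahead so new VMs are ready in time.
        future_load = forecast(now_minute + INIT_PERIOD_MINUTES)
        return vms_needed(future_load)

    # Example: a forecast predicting 6 cores of demand 5 minutes from now
    print(plan_scale_up(lambda t: 6.0, now_minute=0))  # -> 10 VMs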


The IoT Cloud Market

Cloud computing and the Internet of Things (IoT) have become inseparable whenever one or the other is discussed, and with good reason: you really can’t have IoT without the cloud. The cloud, a grander idea that stands on its own, is nonetheless integral to the IoT platform’s success. The Internet of Things is a system of interrelated computing devices, mechanical and digital machines, objects, and other devices provided with unique identifiers (such as an IP address) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. Whereas the traditional internet consists of clients (PCs, tablets, and smartphones, primarily), the Internet of Things can comprise cars, street signs, refrigerators, or watches. And whereas the traditional internet relies on human input and interaction, IoT is almost totally automated. Because the bulk of IoT devices are not in traditional data centers and almost all are connected wirelessly, they rely on the cloud for connectivity. For example, connected cars that send up terabytes of telemetry aren’t always going to be near a data center to transmit their data, so they need cloud connectivity.



Quote for the day:

"Strong convictions precede great actions." -- James Freeman Clarke

Daily Tech Digest - July 03, 2021

DeepMind AGI paper adds urgency to ethical AI

Despite assurances from stalwarts that AGI will benefit all of humanity, there are already real problems with today’s single-purpose narrow AI algorithms that call this assumption into question. According to a Harvard Business Review story, when AI examples from predictive policing to automated credit scoring algorithms go unchecked, they represent a serious threat to our society. A recently published Pew Research survey of technology innovators, developers, business and policy leaders, researchers, and activists reveals skepticism that ethical AI principles will be widely implemented by 2030, owing to a widespread belief that businesses will prioritize profits and governments will continue to surveil and control their populations. If it is so difficult to enable transparency, eliminate bias, and ensure the ethical use of today’s narrow AI, then the potential for unintended consequences from AGI appears astronomical. And that concern covers only the actual functioning of the AI. The political and economic impacts of AI could result in a range of possible outcomes, from a post-scarcity utopia to a feudal dystopia. It is possible, too, that both extremes could coexist.


Distributed DevOps Teams: Supporting Digitally Connected Teams

The teams using the visualization board were in different countries, so they needed to address digital connection across time zones. This meant a more robust process for things like retrospectives, a more thorough breakdown of stories into tasks, more "scheduled" time for showcases and issue resolution, and so on. The team found that, while they worried a more defined process would stymie their agility, it worked well in focusing their activities productively in line with the broader objectives, without the necessity of constant communication. They found they needed more overlapping work time, particularly during release planning and deployment. And they had to think about and plan task and work turnover to the other team at the end of each day, something they never had to do when in physical proximity. They’ve also seen some team members fall back into role-based activities more often: there simply isn’t the natural communication and subsequent spark of curiosity that is truly the hallmark of team collaboration.


The Cost of Managed Kubernetes - A Comparison

When running a Kubernetes cluster in EKS, you can use either a standard Ubuntu image as the OS for your nodes or Amazon’s optimized EKS AMIs, which can deliver better speed and performance than a generic OS. Once the cluster is running, there’s no way to enable automatic upgrades of the Kubernetes version; while EKS does have excellent documentation on how to upgrade your cluster, it is a manual process. If your nodes start reporting failures, EKS also doesn’t have a way of enabling auto-repair like GKE does, which means you’ll have to either monitor for that yourself and manually fix nodes or set up your own system to repair broken nodes. As with GKE, you pay an administration fee of $0.10 per hour per cluster when running EKS, after which you only pay for the underlying resources. If you want to run your cluster on-prem, it’s possible to do so either by using AWS Outposts or EKS Anywhere, which launches sometime in 2021.
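
For budgeting purposes, the $0.10-per-hour administration fee works out to roughly $73 per cluster per month before any node costs. A quick back-of-the-envelope sketch; the node count and node price below are hypothetical placeholders, not AWS list prices.

    HOURS_PER_MONTH = 730        # average hours in a month
    CLUSTER_FEE_PER_HOUR = 0.10  # EKS/GKE administration fee

    def monthly_cost(node_count, node_price_per_hour):
        control_plane = CLUSTER_FEE_PER_HOUR * HOURS_PER_MONTH
        nodes = node_count * node_price_per_hour * HOURS_PER_MONTH
        return control_plane + nodes

    # Example: 5 hypothetical nodes at $0.05/hour each
    print(round(monthly_cost(5, 0.05), 2))  # -> 255.5 (73.0 + 182.5)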


Resetting Your IoT Device Before Reselling It Isn't Enough, Researchers Find

Those that had reset their devices, however, hadn’t quite wiped the slate clean in the way they thought they had. Researchers found that, contrary to what Amazon says, you can actually recover a lot of sensitive personal data stored on factory-reset devices. The reason is related to how these devices store your information using NAND flash memory, a storage medium that, because of the way it manages writes, doesn’t actually delete the data when the device is reset. “We show that private information, including all previous passwords and tokens, remains on the flash memory, even after a factory reset. This is due to wear-leveling algorithms of the flash memory and lack of encryption,” the researchers write. “An adversary with physical access to such devices (e.g., purchasing a used one) can retrieve sensitive information such as Wi-Fi credentials, the physical location of (previous) owners, and cyber-physical devices (e.g., cameras, door locks).” Granted, these hypothetical snoopers would really have to know what they were doing, and their data thieving would entail a certain amount of expertise.
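
A greatly simplified toy model of why wear-leveling defeats a naive reset: writes always land on fresh physical pages, and a factory reset typically clears only the logical mapping, leaving old page contents physically intact. Everything below is an illustrative assumption, not the devices’ actual firmware.

    class ToyNandFlash:
        def __init__(self, num_pages=8):
            self.pages = [None] * num_pages  # raw physical pages
            self.mapping = {}                # logical address -> physical page
            self.next_free = 0

        def write(self, logical_addr, data):
            # Wear-leveling: write to the next fresh page instead of
            # overwriting in place; the old page keeps its previous value.
            self.pages[self.next_free] = data
            self.mapping[logical_addr] = self.next_free
            self.next_free += 1

        def factory_reset(self):
            self.mapping.clear()             # the logical view looks empty...

    flash = ToyNandFlash()
    flash.write("wifi_password", "hunter2")
    flash.write("wifi_password", "hunter3")
    flash.factory_reset()
    print(flash.pages[:2])  # ...but raw pages still hold both old values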


Defeating Ransomware-as-a-Service? Think Intel-Sharing

In addition to technological solutions, a necessary element in building a strong cybersecurity foundation is working with all internal and external stakeholders, including law enforcement. More data enables more effective responses. Because of this, cybersecurity professionals must openly partner with global and regional law enforcement and coordinating bodies such as US-CERT. Sharing intelligence with law enforcement and other global security organizations is the only way to effectively take down cybercrime groups. Defeating a single ransomware incident at one organization does not reduce the overall impact within an industry or peer group; it’s common practice for attackers to target multiple verticals, systems, companies, networks, and software. To make attacks more difficult and resource-intensive for cybercriminals, public and private entities must collaborate by sharing threat information and attack data. Private-public partnerships also help victims recover their encrypted data, ultimately reducing the risks and costs associated with an attack. Visibility increases as public and private entities band together.


Maintaining a Security Mindset for the Cloud Is Crucial

A lot of organizations are moving from traditional on-premises application deployments into one or multiple clouds. Those transitions carry architectural questions about how to design networking and security for this new cloud era, where applications are distributed across multicloud, software-as-a-service, and even edge computing environments. Security has therefore become paramount to the success of that move. We also know that security attacks are becoming increasingly sophisticated, and that’s especially true when applications are moving to the cloud. Meanwhile, cloud infrastructure does not always offer the same level of capabilities and features that enterprises have been used to in their on-premises environments. So this security-oriented mindset is extremely important for building networks that now span not only the on-premises environment but also cloud environments.


DevOps Automation: How Is Automation Applied In DevOps Practice

We can see automation being carried out at every phase of development, from triggering the build, through unit testing, packaging, and deploying to the specified environments, to running build verification tests, smoke tests, and acceptance tests, and finally deploying to the production environment. And automating test cases means not just unit tests but installation tests, integration tests, user experience tests, UI tests, and so on. DevOps also pushes the operations team to automate all of their activities alongside development activities: provisioning servers, configuring servers, configuring networks and firewalls, and monitoring the application in production. So, to answer what to automate: build triggering, compiling and building, deploying or installing, infrastructure set-up as a coded script, environment configuration as a coded script, testing, post-deployment performance monitoring, log monitoring, alert monitoring, and pushing notifications to and receiving alerts from the live system whenever errors or warnings occur.
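
A minimal, hypothetical sketch of that chain as a pipeline-as-code driver; the make targets are placeholders for whatever build, test, and deploy commands a project actually uses, and a real CI server would add notifications and alerting around each stage.

    import subprocess
    import sys

    PIPELINE = [
        ("build",      ["make", "build"]),
        ("unit tests", ["make", "test"]),
        ("deploy",     ["make", "deploy"]),
        ("smoke test", ["make", "smoke"]),
    ]

    def run_pipeline():
        for name, cmd in PIPELINE:
            print(f"--- {name}: {' '.join(cmd)}")
            result = subprocess.run(cmd)
            if result.returncode != 0:
                # Fail fast, as a CI server would, and surface the stage.
                sys.exit(f"stage '{name}' failed with code {result.returncode}")

    if __name__ == "__main__":
        run_pipeline()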


Kubernetes-Run Analytics at the Edge: Postgres, Kafka, Debezium

Implementing databases and data analytics within cloud native applications involves several steps and tools, from data ingestion and preliminary storage to data preparation and storage for analytics and analysis. An open, adaptable architecture will help you execute this process more effectively, and it requires several key technologies. Container and Kubernetes platforms provide a consistent foundation for deploying databases, data analytics tools, and cloud native applications across infrastructure, as well as self-service capabilities for developers and integrated compute acceleration. PostgreSQL, Apache Kafka, and Debezium can be deployed using Kubernetes Operators to provide a cloud native data analytics solution that can be used for a variety of use cases across hybrid cloud environments, including the datacenter, public cloud infrastructure, and the edge, for all stages of cloud native application development and deployment.
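
To show how those pieces connect in practice, here is a hedged sketch of registering a Debezium Postgres connector through the Kafka Connect REST API, the standard way Debezium captures Postgres changes into Kafka topics. The hostnames, credentials, and database name are hypothetical placeholders for whatever the Operators deploy in a given cluster.

    import requests

    connector = {
        "name": "inventory-connector",  # hypothetical connector name
        "config": {
            "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
            "database.hostname": "postgres.example.svc",  # placeholder service
            "database.port": "5432",
            "database.user": "debezium",
            "database.password": "secret",  # use a Kubernetes Secret in practice
            "database.dbname": "inventory",
            "database.server.name": "edge-pg",  # prefix for change-event topics
        },
    }

    # Kafka Connect exposes a REST endpoint (default port 8083) for managing
    # connectors; change events then land in Kafka topics for analytics consumers.
    resp = requests.post("http://connect.example.svc:8083/connectors",
                         json=connector, timeout=10)
    resp.raise_for_status()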


DevOps Testing Tutorial: How DevOps Will Impact QA Testing?

Although there are subtle differences between Agile and DevOps testing, those working with Agile will find DevOps a little more familiar to work with (and eventually adopt). While Agile principles are applied successfully in development and QA iterations, it is a different story altogether (and often a bone of contention) on the operations side. DevOps proposes to rectify this gap: instead of stopping at continuous integration, DevOps involves “continuous development,” where code, once written and committed to version control, is built, deployed, tested, and installed on the production environment, ready to be consumed by the end user. This process helps everyone in the chain, since environments and processes are standardized and every action in the chain is automated. It also gives all stakeholders the freedom to concentrate their efforts on designing and coding a high-quality deliverable rather than worrying about the various building, operations, and QA processes. And it brings the time-to-live down drastically, to about 3-4 hours from the time code is written and committed to deployment on production for end-user consumption.



Where Can An Agile Transformation Lead Your Company?

The rituals of Agile development are largely procedural and tactical. In contrast, an organizational agile transformation is driven by, and reinforces, the cultural norms that make staying agile possible. A development lead can compel team members to participate in daily scrums and weekly sprints, but Agile development doesn’t address the task of building genuine collaboration or a culture of accountability. An agile transformation requires cultural support to move the organization into a state of resonant agility; that state, in turn, reinforces and strengthens the norms of collaboration and accountability that an agile culture encourages. An agile culture takes a broader view, beyond providing a prescriptive process for building something specific: it pulls together stakeholders from multiple functional areas to tackle an issue through organic, collaborative analysis. ... Next-generation technologies are purpose-built, not broad platforms that force conformity instead of innovation. There’s no one platform or suite of tools for an agile organization. Teams work with an organic tech stack that gives them the flexibility to use the best tool for the job, and everyone’s job is different.



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard