Daily Tech Digest - February 13, 2021

Why Your Next CIO Will Be a CPO

The role of the CIO was to deploy technology efficiently to support the company’s strategies and plans. The role of the CPO, as inherited from pure technology companies, is to develop and maintain a deep understanding of the customer and market and guide the delivery of products to best meet and monetize their needs, and do so ahead of any and all competition. The traditional CIO derives the why and what from other parts of the organization and supplies the how. Transitional versions of the CIO and the CDO and other neologisms may start to encroach on the what. But the true CPO drives the why and what—and the how if they also have engineering, or collaborates on the how with a CTO or head of development if not. Does this sound broad, even encroaching on CEO territory? Well, yes. It’s no accident that former product chiefs are the new CEOs of Google and Microsoft. So what does that mean for you if you are in an IT organization? Well, first, while your organization may or may not change the actual title to CPO from CIO, it’s important for your career to recognize when the definition of their job becomes that of what a CPO would do in a “pure” software company.


The most fundamental skill: Intentional learning and the career advantage

Stanford psychologist Carol Dweck’s popular work on mindsets suggests that people hold one of two sets of beliefs about their own abilities: either a fixed or a growth mindset. A fixed mindset is the belief that personality characteristics, talents, and abilities are finite or fixed resources; they can’t be altered, changed, or improved. You simply are the way you are. People with this mindset tend to take a polar view of themselves—they consider themselves either intelligent or average, talented or untalented, a success or a failure. A fixed mindset stunts learning because it eliminates permission not to know something, to fail, or to struggle. Writes Dweck: “The fixed mindset doesn’t allow people the luxury of becoming. They have to already be.” In contrast, a growth mindset suggests that you can grow, expand, evolve, and change. Intelligence and capability are not fixed points but instead traits you cultivate. A growth mindset releases you from the expectation of being perfect. Failures and mistakes are not indicative of the limits of your intellect but rather tools that inform how you develop. A growth mindset is liberating, allowing you to find value, joy, and success in the process, regardless of the outcome.


The Dos and Don’ts for SMB Cybersecurity in 2021

With insider threats accounting for the largest share of cyberattacks, SMBs need to get to the root of the problem — human behavior. Inspiring change begins with raising awareness. To do this effectively, SMBs must first reflect on their business as a whole. This means identifying every “weak point” and addressing every potential impact the business could suffer if those weak points were targeted. For instance, many SMBs operate across supply chains, which include various virtual and physical touchpoints. Because of this, if one section of the supply chain were to get hit by a cyberattack, the entire system could come crumbling down. By gathering and sharing this information in consistent organizationwide training sessions that inform and entertain, SMBs can empower their staff with deeper threat awareness and help improve their individual security posture. ... SMBs should consider bringing on external experts to regularly analyze their IT infrastructure. This will ensure that they have an unbiased view of the business’ needs and the strongest protection possible. Coupled with this, SMBs should regularly conduct internal security audits to better understand where hidden back doors exist across their organization.


Can Care Robots Improve Quality Of Life As We Age?

The new generation of care robots do far more than just manual tasks. They provide everything from intellectual engagement to social companionship that was once reserved for human caregivers and family members. When it comes to replicating or substituting human connection, designers must be intentional about what outcomes these robots are designed to achieve. To what degree are care robots facilitating and maximizing emotional connection with others (a personified AI assistant that helps you call your grandchildren, for example) or providing the actual connection itself (such as a robot that appears as a huggable, strokable pet)? Research suggests that an extensive social network offers protection against some of the intellectual effects of aging. There could also be legitimate uses for this kind of technology in mental health and dementia therapy, where patients are not able to care for a “real” pet or partner. Some people might also find it easier to bond or be vulnerable with an objective robot than a subjective human. Yet the risks and externalities of robots as social companions are not yet well understood. Would interacting with artificial agents lead some people to engage less with the humans around them, or develop intimacy with an intelligent robot?


IBM and ExxonMobil are building quantum algorithms to solve this giant computing problem

Research teams from energy giant ExxonMobil and IBM have been working together to find quantum solutions to one of the most complex problems of our time: managing the tens of thousands of merchant ships crossing the oceans to deliver the goods that we use every day. The scientists lifted the lid on the progress that they have made so far and presented the different strategies that they have been using to model maritime routing on existing quantum devices, with the ultimate goal of optimizing the management of fleets. ... Although the theory behind the potential of quantum computing is well-established, it remains to be found how quantum devices can be used in practice to solve a real-world problem such as the global routing of merchant ships. In mathematical terms, this means finding the right quantum algorithms that could be used to most effectively model the industry's routing problems, on current or near-term devices. To do so, IBM and ExxonMobil's teams started with widely-used mathematical representations of the problem, which account for factors such as the routes traveled, the potential movements between port locations and the order in which each location is visited on a particular route.
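To make “mathematical representations” concrete: a standard binary formulation of such a routing problem (a generic textbook sketch, not necessarily the exact model the IBM and ExxonMobil teams used) introduces a decision variable x(i,j) that is 1 if a ship sails directly from port i to port j and 0 otherwise, and then asks to

    minimize    Σ over (i,j) of  c(i,j) · x(i,j)
    subject to  every port being entered exactly once and left exactly once,

where c(i,j) is the cost of the leg from port i to port j. Constrained problems of this shape can be recast as quadratic unconstrained binary optimization (QUBO) instances, which is the input format that near-term quantum algorithms such as QAOA accept.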


Palo Alto Networks Joins Flexible Firewall Party. Will Cisco Follow Suit?

In addition to migrating workloads to public clouds, companies also started demanding a cloud-like experience in their data centers. This includes consumption-based pricing and the flexibility to scale usage and add services on demand. “And what we’re now doing is bringing extreme flexibility, simplicity, and agility to the network security and software firewalls,” Gupta said. “So that’s why we’re reinventing yet again how customers buy these software firewalls and security subscriptions. And I hope that the industry will adopt that model and make it easier for customers.” However, other leading firewall vendors have already adopted similar consumption-based licensing approaches. Fortinet, Forcepoint, and Check Point rolled theirs out last year. Fortinet’s programs aim to give its virtual firewall customers more flexibility in how they consume those products and security services, said Vince Hwang, senior director of products at Fortinet. ... “They can allocate the points to any virtual firewall size and type of security services in seconds without incurring a procurement cycle. These virtual firewalls and security services can be used on any cloud and anytime. Customers can manage their consumption through a central portal available through Fortinet’s FortiCare service.”


India's Blockchain Ecosystem Is a Hotbed Of Crypto Innovation

Advancements in artificial intelligence have led to the development of automated decentralized finance strategies to replace the role of traditional fund managers, monitoring the market to identify the best risk-adjusted assets to deliver investment returns. Rocket Vault Finance leverages these advanced artificial intelligence predictive analysis tools and machine learning algorithms to develop data-driven, intelligent, and automated investment strategies to minimize losses and maximize gains. They consistently achieve over 100% APY returns for stablecoin capital and avoid managing multiple crypto assets over a range of liquidity mining, staking, or other DeFi platforms, reducing fees and risk. Rocket Vault Finance is free to use for retail investors holding the platform’s RVF tokens, with paid services on offer to institutional investors, providing an automated hybrid alternative to riskier yield farming projects and traditional market returns. Several other projects are also contributing to the rapidly growing Indian blockchain ecosystem, expanding the value proposition as a result.


The virtual security guard: AI-based security startups have been the toast of the town, here’s why

As the threat landscape evolves, security providers have to always be on their toes, and businesses have to adopt a more unified approach to cyber risk management. Some of the biggest challenges that security and risk management leaders face are the lack of a consistent view at a micro and macro level, the ability to prioritise what’s most critical, and maintaining transparency across the organisation when it comes to cybersecurity. “SAFE is built on the premise of these challenges and our ability to provide real-time visibility at both a granular IP level and at an organisational level across people, process, technology, cybersecurity products, and third parties brings a completely new approach to enterprise cyber risk management,” says Saket Modi, Co-founder & CEO, Safe Security, a cybersecurity platform company. ... Growing at a mind-boggling 450 per cent, WiJungle, another AI-based security startup, uses AI for automation at the network level and threat detection and analysis. The NetSec (network security) vendor offers a solution for office and remote network security.


How to adopt DevSecOps successfully

The DevSecOps manifesto says that the reason to integrate security into dev and ops at all levels is to implement security with less friction, foster innovation, and make sure security and data privacy are not left behind. Therefore, DevSecOps encourages security practitioners to adapt and change their old, existing security processes and procedures. This may sound easy, but changing processes, behavior, and culture is always difficult, especially in large environments. The DevSecOps principle's basic requirement is to introduce a security culture and mindset across the entire application development and deployment process. This means old security practices must be replaced by more agile and flexible methods so that security can iterate and adapt to the fast-changing environment. ... Clearly, the biggest and most important change an organization needs to make is its culture. Cultural change usually requires executive buy-in, as a top-down approach is necessary to convince people to make a successful turnaround. You might hope that executive buy-in makes cultural change follow naturally, but don't expect smooth sailing—executive buy-in alone is not enough. To help accelerate cultural change, the organization needs leaders and enthusiasts who will become agents of change.


Web shell attacks continue to rise

The escalating prevalence of web shells may be attributed to how simple and effective they can be for attackers. A web shell is typically a small piece of malicious code written in common web development programming languages (e.g., ASP, PHP, JSP) that attackers implant on web servers to provide remote access and code execution to server functions. Web shells allow attackers to run commands on servers to steal data or use the server as a launch pad for other activities like credential theft, lateral movement, deployment of additional payloads, or hands-on-keyboard activity, while allowing attackers to persist in an affected organization. As web shells are increasingly common in attacks, both commodity and targeted, we continue to monitor and investigate this trend to ensure customers are protected. In this blog, we will discuss challenges in detecting web shells, and the Microsoft technologies and investigation tools available today that organizations can use to defend against these threats. We will also share guidance for hardening networks against web shell attacks. Attackers install web shells on servers by taking advantage of security gaps, typically vulnerabilities in web applications, in internet-facing servers.
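Because a web shell is often just a one-line script hiding among legitimate files, one simple defensive layer is to sweep the web root for telltale call patterns. A minimal Python sketch of that idea (the patterns and the web root path are illustrative, not a complete ruleset):

    import re
    import pathlib

    # Illustrative signatures of common web shell primitives.
    SUSPICIOUS = [
        re.compile(rb"eval\s*\(\s*base64_decode"),                                  # obfuscated PHP payloads
        re.compile(rb"(system|passthru|shell_exec)\s*\(\s*\$_(GET|POST|REQUEST)"),  # PHP one-liners
    ]

    def scan(webroot="/var/www"):
        for path in pathlib.Path(webroot).rglob("*"):
            if path.is_file() and path.suffix.lower() in {".php", ".asp", ".aspx", ".jsp"}:
                data = path.read_bytes()
                if any(pattern.search(data) for pattern in SUSPICIOUS):
                    print(f"possible web shell: {path}")

    if __name__ == "__main__":
        scan()

A scanner like this only catches known idioms; the article's larger point stands that behavioral monitoring on the server is needed for anything obfuscated beyond simple signatures.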



Quote for the day:

"Change the changeable, accept the unchangeable, and remove yourself from the unacceptable." -- Denis Waitley

Daily Tech Digest - February 12, 2021

Remote work at industrial sites brings extra cyber risk

Consider an automation engineer who needs access to control system configuration data remotely to analyze and optimize an industrial process. Giving remote access directly to the engineering workstation for the control system increases cybersecurity risk for an industrial company. In many cases, these control systems are 20 or even 30 years old, so they weren't built with cybersecurity in mind. Because of their critical nature in driving revenue for the business, they are shut down and upgraded very infrequently as compared to IT systems. It is not uncommon to have these control systems run for five to 10 years between shutdown and maintenance routines. Therefore, they often contain known cybersecurity vulnerabilities that are unpatched even if those patches have been available for years. So, back to our example of the automation engineer, it would be very risky to enable direct access to the control system engineering workstation over the public internet even if the engineer connects to a corporate VPN first from their home office. As a result, we recommend industrial customers maintain separate copies of their industrial control system configurations in an asset management system that the engineer can access remotely.


10 Top Open Source API Gateways and Management Tools

Kong Gateway (OSS) is a popular, open-source, and advanced cloud-native API gateway built for universal deployment: it can run on any platform. It is written in the Lua programming language, supports hybrid and multi-cloud infrastructure, and is optimized for microservices and distributed architectures. At its core, Kong is built for high performance, extensibility, and portability. Kong is also lightweight, fast, and scalable. It supports declarative configuration without a database, using in-memory storage only, as well as native Kubernetes CRDs. Kong features load balancing (with different algorithms), logging, authentication (support for OAuth 2.0), rate limiting, transformations, live monitoring, service discovery, caching, failure detection and recovery, clustering, and much more. Importantly, Kong supports the clustering of nodes and serverless functions. It supports configuring proxies for your services, serving them over SSL, or using WebSockets. It can load balance traffic through replicas of your upstream services, monitor the availability of your services, and adjust its load balancing accordingly.
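As a small illustration of how such features are wired up, the sketch below registers a service, a route, and a rate-limiting plugin through Kong's Admin API from Python. It assumes a database-backed Kong instance with the Admin API listening on localhost:8001; the service name and upstream URL are made up:

    import requests

    ADMIN = "http://localhost:8001"  # Kong Admin API

    # Register an upstream service and a route that exposes it.
    requests.post(f"{ADMIN}/services",
                  data={"name": "orders", "url": "http://orders.internal:8080"}).raise_for_status()
    requests.post(f"{ADMIN}/services/orders/routes",
                  data={"paths[]": "/orders"}).raise_for_status()

    # Enable rate limiting on the service: at most 100 requests per minute.
    requests.post(f"{ADMIN}/services/orders/plugins",
                  data={"name": "rate-limiting", "config.minute": 100}).raise_for_status()

In Kong's DB-less declarative mode, the same three objects would instead be listed in a kong.yml file that the node loads at startup.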


Three ways to bridge the IT skills gap in a post-pandemic world

New environments require new expertise. When it comes to cloud, for example, the challenge of building, maintaining and monitoring a complex cloud infrastructure is often beyond the capabilities or knowhow of existing staff. Moreover, the technology landscape shifts so often that many teams simply can’t keep up. According to Gartner, a majority (80%) of today’s workers feel they don’t have the skills required for their current role and future career. Compounding the issue, 53% of business leaders struggle to find candidates with the right abilities during the hiring process. ... Hiring new talent may seem like the first, most obvious solution. This enables organisations to pinpoint the type of candidate they require, and only interview those that will fulfil that need. However, hiring externally is made more difficult when looking for more niche capabilities, and it certainly costs more. The pool of potential candidates is extremely small when recruiting for roles that demand advanced IT skills, like cloud-native orchestration, SAP expertise or DevOps, and organisations end up paying a premium. Another obstacle when looking to hire skills from outside is that next year’s IT budgets are likely to be reduced thanks to Covid-19. While it isn’t wrong to hire new team members to support your existing IT team, and it will indeed be the right choice in certain situations, it certainly isn’t the only answer.


Will Russian Cryptocurrency Law Drive Hacker Recruitment?

Under the law, banks and exchanges in Russia can handle digital currency, provided they register with the Bank of Russia - the country's central bank - and maintain a register of all operators and transactions. The law also states that only institutions and individuals who have declared transactions to authorities can later seek redress in court, for example, if someone steals their cryptocurrency. "In Russia, the use of bitcoin and other crypto assets as a means of payment is prohibited. There are no signs that a change in legislation allowing crypto assets to be used as a means of payment in Russia will be forthcoming," legislator Anatoly Aksakov, the chief backer of legislation designed to regulate the use of cryptocurrency, told Russian radio station Govorit Moskva last month. "Taxation, compulsory declaration - these things are already enforced by law," said Aksakov, who chairs the Committee on the Financial Market in the State Duma, the lower house of the country's parliament. And going forward, he predicted "there will only be more and more control over the holding of cryptocurrencies." Security experts say that for years, Russian officials and intelligence agencies have looked the other way when it comes to cybercrime, so long as criminals follow this rule: Never hack Russians or allied countries.


A playbook for modernizing security operations

Most security operations centers are very reactive. Mature organizations are moving toward more proactive hunting or threat hunting. A good example: if you’re sending all of your logs through Azure Sentinel, you can use Kusto Query Language queries to analyze your data sets and look for unusual activity. These organizations go through command line arguments, service creations, parent-child process relationships, or Markov chaining, where you can look at unusual deviations of parent-child process relationships or unusual network activity. It’s a continual progression starting off with the basics and becoming more advanced over time as you run through new emulation criteria or simulation criteria through either red teaming or automation tools. They can help you get good baselines of your environment and look for unusual traffic that may indicate a potential compromise. Adversary emulations are where you’re imitating a specific adversary through known techniques discovered through data breaches. For example, we look at what happened with the SolarWinds supply chain attack—and kudos to Microsoft for all the research out there—and we say, here are the techniques these specific actors were using, and let’s build detections off of those so they can’t use them again.
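As a concrete taste of that kind of hunting, the sketch below runs a Kusto query against a Sentinel-backed Log Analytics workspace from Python, looking for an Office process spawning a shell, one classic parent-child anomaly. This is a hedged sketch: the table, event ID, and process names are illustrative and depend on which connectors feed your workspace.

    from datetime import timedelta
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    # Kusto query: Word spawning cmd.exe or powershell.exe is rarely benign.
    QUERY = """
    SecurityEvent
    | where EventID == 4688
    | where ParentProcessName endswith "winword.exe"
    | where NewProcessName endswith "cmd.exe" or NewProcessName endswith "powershell.exe"
    | project TimeGenerated, Computer, ParentProcessName, NewProcessName
    """

    client = LogsQueryClient(DefaultAzureCredential())
    response = client.query_workspace("<workspace-id>", QUERY, timespan=timedelta(days=7))
    for table in response.tables:
        for row in table.rows:
            print(row)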


Overcoming Digital Transformation Challenges With The Cloud

The cloud can enhance information sharing and collaboration across data platforms and digital ecosystems. Deloitte research shows 84% of physicians expect secure, efficient sharing of patient data integrated into care in the next five to 10 years. Real-world evidence will be critically important in enhancing digital healthcare with historical patient data, real-time diagnostics, and personalized care. Organizations can leverage the cloud for greater collaboration, data standardization, and interoperability across their ecosystem. Research shows digital business ecosystems using cloud experience greater customer satisfaction rates, with 96% of organizations surveyed saying their brand is perceived better and that they saw improved revenue growth -- with leaders reporting 6.7% average annual revenue growth (vs. 4.9% reported by others). ... As organizations rely on the cloud, cloud security becomes increasingly important for data integrity and workload and network security. Information leakage, cloud misconfiguration, and supply chain risk are the top concerns for organizations. A federated security model, zero trust approach, and robust cloud security controls can help to remediate these risks, increase business agility, and improve trust.


AI Can Now Identify Humans' Vulnerabilities & Use Them To Influence Their Decision Making

A team of researchers at CSIRO’s Data61, the data and digital arm of Australia’s national science agency, devised a systematic method of finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system called a recurrent neural network and deep reinforcement-learning. To test their model they carried out three experiments in which human participants played games against a computer. The first experiment involved participants clicking on red or blue coloured boxes to win a fake currency, with the AI learning the participant’s choice patterns and guiding them towards a specific choice. The AI was successful about 70 percent of the time. In the second experiment, participants were required to watch a screen and press a button when they are shown a particular symbol (such as an orange triangle) and not press it when they are shown another (say a blue circle). Here, the AI set out to arrange the sequence of symbols so the participants made more mistakes, and achieved an increase of almost 25 percent. The third experiment consisted of several rounds in which a participant would pretend to be an investor giving money to a trustee (the AI). The AI would then return an amount of money to the participant, who would then decide how much to invest in the next round.
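The researchers' agent was a recurrent neural network trained with deep reinforcement learning; as a much simpler stand-in, the toy below captures the core mechanic of the first experiment, predicting a player's next red/blue choice from short patterns in their history (a plain 2-gram counter, purely illustrative):

    from collections import Counter, defaultdict

    class ChoicePredictor:
        """Predict the next choice from the player's last two choices."""
        def __init__(self):
            self.counts = defaultdict(Counter)  # (prev2, prev1) -> choice frequencies
            self.history = []

        def predict(self):
            key = tuple(self.history[-2:])
            if self.counts[key]:
                return self.counts[key].most_common(1)[0][0]
            return "red"  # arbitrary default before any data arrives

        def observe(self, choice):
            self.counts[tuple(self.history[-2:])][choice] += 1
            self.history.append(choice)

    predictor = ChoicePredictor()
    for choice in ["red", "blue", "red", "blue", "red", "blue", "red"]:
        print("guess:", predictor.predict(), "actual:", choice)
        predictor.observe(choice)

Even this crude model starts beating chance against a player with any habit; the RNN in the study captures far richer patterns, and the reinforcement learner then chooses actions that steer those patterns toward a target choice.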


Dark web analysis shows high demand for hackers

The research found that in the vast majority of cases on these forums, most individuals are looking for a hacker, and in 7 out of 10 ads, their main goal is to gain access to a web resource. The research discovered that in 90% of cases, users of dark web forums will search for hackers who can provide them with access to a particular resource or who can download a user database. Only seven percent of forum messages analyzed included individuals offering to hack websites. The remaining three percent of the messages analyzed were aimed at promoting hacking tools and programs, and at finding like-minded people to share hacking experience. Positive Technologies analyst Yana Yurakova said: “Since March 2020, we have noticed a surge of interest in website hacking, which is seen by the increase in the number of ads on forums on the dark web. This may have been caused by an increase in the number of companies available via the internet, which was triggered by the COVID-19 pandemic. “As a result of this, organizations that previously worked offline were forced to go online in order to maintain their customers and profits, and cybercriminals, naturally, took advantage of this situation.”


Digital Trends 2021: Responsible Business Puts Trust, Ethics, And Sustainability First

Many businesses have done some soul-searching in the wake of the pandemic, political discord, and long-simmering equity demands. Two years ago, Business Roundtable, an association of U.S.-based CEOs, updated its statement on the purpose of a corporation to “take into account all stakeholders, including employees, customers, and the community,” rather than only profit. Maybe that’s partly why Gartner analysts predicted the emergence of responsible AI, meaning the operationalization of AI accountability across organizations and society. They saw responsible AI as an umbrella term covering many aspects of AI implementations including value, risk, trust, transparency, ethics, fairness, interpretability, accountability, safety, and compliance. Most analysts predicted that sentiment analyses and metrics documenting a company’s contributions to society will matter even more in 2021 and over time. Gartner analysts predicted 30 percent of major organizations will use a “voice of society” metric to act on societal issues, and assess the impact on their business performance by 2024. Turns out what’s damaging to society is damaging to business.


Agile Approaches for Building in Quality

Built-in Quality is a core pillar in agile. If you take Scrum, for instance, the team should deliver potentially shippable products. These done increments are to be of sufficient quality. We like to say that quality is built into the product. When working with multiple teams on one product or service, we can apply a scaling agile framework. There are a few scaling agile frameworks, e.g. LeSS, Nexus and SAFe. The latter is most prescriptive, so I like to look at SAFe to answer this question. SAFe states BIQ to be one of its fundamental pillars and advises a few practices: Think test first, automate your tests, have a regression test strategy, set up CI/CD pipelines and embed quality in the development process. The other frameworks are less explicit but expect you to do good Scrum, so with that, they embrace all these development practices as well. ... Agile coaches help teams and organisations to embrace the agile way of working. I think agile coaching evolves into three roles: the agile counsellor, the delivery coach, and the team coach. The team coach typically helps the team with understanding the agile principles and mindset. In this role, the coach can create awareness at the team level for the typical development practices I talked about earlier.



Quote for the day:

"Generosity is giving more than you can, and pride is taking less than you need." -- Kahlil Gibran

Daily Tech Digest - February 11, 2021

Supply-Chain Hack Breaches 35 Companies, Including PayPal, Microsoft, Apple

“The vast majority of the affected companies fall into the 1000+ employees category, which most likely reflects the higher prevalence of internal library usage within larger organizations,” Birsan noted. The researcher received more than $130,000 in both bug bounties and pre-approved financial arrangements with targeted organizations, who all agreed to be tested. The hack’s original target, PayPal, as well as Apple and Canada’s Shopify, each contributed $30,000 to that amount. Birsan said he came up with an idea to explore the trust that developers put in a “simple command,” “pip install package_name,” which they commonly use with programming languages such as Python, Node, Ruby and others to install dependencies, or blocks of code shared between projects. These installers—such as Python Package Index for Python or npm and the npm registry for Node—are usually tied to public code repositories where anyone can freely upload code packages for others to use, Birsan noted. However, using these packages comes with a level of trust that the code is authentic and not malicious, he observed.
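A common mitigation for this class of attack (standard pip practice, not something from Birsan's writeup) is to pin every dependency to an exact version and expected hash, so a look-alike package pulled from the wrong index fails to install:

    # requirements.txt: pin the version and the archive digest
    requests==2.25.1 \
        --hash=sha256:<digest recorded from a trusted download>

    # pip then rejects any archive whose hash does not match
    pip install --require-hashes -r requirements.txt

Pointing --index-url at an internal mirror that proxies only vetted packages closes the remaining gap for private package names.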


Continual Learning will be the Cornerstone to Success

In the Fourth Industrial Revolution, the urgency to future-proof and transition careers has required nothing short of a reskilling revolution. According to global Salesforce research, since the onset of the pandemic 40% of the workforce have considered a career change. As the digital economy continues to evolve, businesses don’t just have a responsibility to provide employees opportunities to retrain and transition to the jobs of the future. It’s increasingly within their interest to do so. Now more than ever, people need access to the technologies and skills necessary to land the jobs of the future. This is why at Salesforce we launched Trailhead in 2014, our free online learning platform, to democratise education and provide an equal pathway into the tech industry. Since the onset of the pandemic we’ve seen a 37% increase in registrations to courses – joining over 2.2 million learners gaining technical, business, partner, and soft skills. Delivering in-demand skills and resume-worthy credentials, we’re addressing the skills imperative and equipping people with the tools they need to succeed. As a society, we need to continually ask ourselves whether we are doing enough to provide everyone with the opportunity to participate.


Artificial Intelligence In The Corporate Boardroom

With respect to accountability – human directors’ decision-making should not be replaced or influenced by unaccountable artificial intelligence’s decision-making. I warn that using artificial intelligence to make decisions in boardrooms could lead to a void of accountability. The use of artificial intelligence in boardrooms could raise other issues as well. ...  Human directors, who have consciousness and a conscience, would be accountable; whereas I do not know how AI-directors could effectively be held accountable. This would be an instance in which the risk that directors lose their independent judgment intertwines with the accountability issues possibly arising from the use of artificial intelligence in corporate boardrooms. ... Philosophers warn us that if artificial intelligence developed a conscience and consciousness, it could also possibly experience suffering. Uber-intelligence could lead to uber-suffering. As I wrote in my article, “no potential benefits resulting from the use of AI in the boardrooms, in corporate governance, or in other settings could be worth the risk that artificial agents could suffer; even more drastically, no potential benefit resulting from the use of AI is worth the risk that relations between natural beings and artificial beings could evolve into exploitative relations.”


Developers: This is the one skill most likely to get you hired, according to IBM

The conclusion falls in line with the findings of a recent study by the Linux Foundation, which found that hiring managers are 70% more likely to hire a professional with knowledge of open cloud technologies. At the same time, the same report showed that 93% of respondents were struggling to find sufficient talent with open-source skills. Mastering open-source tools and programming libraries can therefore add a lot of value to a developer's CV. Among the most important tools to add to developers' skillset, Linux featured prominently, with an overwhelming 95% of developers saying they considered the technology to be important to their career; but the understanding of containers and databases also ranked high. IBM's latest research comes in the midst of increasing interest in open-source software, and a desire to tap the technology to create value. Not-for-profit think tank the OpenForum Europe recently found that the open-source ecosystem was contributing up to €95 billion ($113.7 billion) per year to the EU's GDP; and that even a marginal increase of activity could boost the continent's wealth by hundreds of billions of euros.


Is it time to ban ransomware insurance payments?

Erin Kenneally, director of cyber risk analytics at Guidewire, and previously a staffer in the US Department of Homeland Security’s cyber division, says dialogue is needed to disincentivise both the supply-side and the demand-side for ransomware payments – banning insurance payments would evidently fall under the former approach. She also highlights that current light touch interventions for ransomware have been shown to be ineffective. “The US, for example, has issued an Office of Foreign Assets Control [OFAC] advisory on the sanction risks of paying ransoms and a FINCEN Advisory on reporting ransomware red flag indicators. To date, there have been no civil penalties levied against victim companies, insurers or response firms for paying or facilitating the payment of cyber extortion,” she says. “In a nutshell, since the ransom is often lower than the cost of recovery, business interruption and lost business – the convergence of which can spell financial death – many victims and insurers simply pay the ransom and risk sanctions. “As a result, insurers have taken a rational economics approach to ransomware payments, leading to a growing sentiment that the industry is worsening the problem by paying extortions.”


Are Autonomous Businesses Next?

The most extreme form of automation is an autonomous system that operates without human intervention. That's not to say that autonomous systems don't need oversight, however. "Automation is a necessary, functional component of an autonomous system. 'Autonomous' implies a degree of artificial intelligence, decision making that is not necessarily rule or workflow based, rather taking actions based on new patterns that are not hard coded into the system," said Robert Greene, senior director, Oracle Autonomous Database product management. "Automation…still requires a human to make the decision to invoke [an] action, so a human is still in the loop." Organizations are automating more tasks using robotics process automation (RPA) and in some cases, they're inheriting autonomous capabilities from the enterprise products they use such as the Oracle Autonomous Database. "You start out by automating smaller steps with smaller stakes, so your organization builds its internal capacity to do automation well and learn how to make it work in hybrid situations that involve people," said Chris Nicholson, founder and CEO of deep reinforcement learning solution provider Pathmind.


Digital transformation: Leadership imperatives for 2021

Digital tools, used appropriately and effectively, can contribute to planning and monitoring internal processes, increasing transparency and accountability across all levels of management, and building customers’ trust. Digital tools are not only helping leaders solve complex issues related to personnel and minimizing operational costs, but also improving decision making. However, leaders will have to verify the suitability of tech tools being implemented in relation to organizational needs and objectives. These are not top-down decisions. Leaders promoting open ways of working in their organization could make this a more inclusive and participatory process by adopting and implementing an approach such as the Open Decision-Making Framework. One key factor to remember: While digital technologies have much potential to improve organizational processes, leaders must take proactive actions and measured steps to help employees internalize and integrate these processes. The easier that leaders make it for employees to adapt to and use new technology in their daily routines, the faster the integration. The hardest part is often the change management: Leaders need to facilitate this in a way that instills a positive attitude in employees.


How the SRE Role Is Evolving

First, not all companies have embraced an SRE model. A recent study by Blameless found “… 50% of respondents employ an SRE model with dedicated engineers focused on infrastructure and tooling, or an embedded model where full-time SREs are assigned to a service.” The SRE model is gaining momentum, but there is still room for greater adoption. There is also room for internal growth. Ostrowski sees a single SRE team as a single point of failure. “It needs to be a whole department,” he said. In addition, SREs are gaining a more prominent voice at the table, influencing feature rollout. “With proper and mature SRE involvement, teams can’t willy-nilly deploy,” he said. Ostrowski views these teams as maintaining a critical balance between business risk and introducing new technology. Many companies are experiencing rising user demands, and thus must rapidly scale their application networks. Simultaneously, there has been a Cambrian explosion of deployment types — systems could be using any assortment of legacy infrastructure, mainframe, microservices, cloud environments and multiple cloud vendors. “The complexity and topology of the IT space has grown substantially, with many interdependencies,” Ostrowski said.


Data Science vs Business Intelligence, Explained

You will recognize business intelligence by its charts, dashboards, database diagrams, and data integration projects. It is expensive and frustrating -- but indispensable. BI has a permanent advantage over DS because it has concrete data points; few, simple assumptions; self-explanatory metrics; and automated processes. Furthermore, BI will never go away. It will always be a work in progress because you will never stop changing your business or upgrading and replacing the source systems. ... Looking in the rearview mirror of data is important and helpful, but it's limited and will never get you where you want to go. At some point you need to look ahead. BI needs to be accompanied by data science. DS is a complicated, sophisticated form of planning and optimization. Examples include: Predicting in real time which product a customer is most likely to buy; Forming a weighted network between business micro events and micro responses so that decisions can be made without human intervention, then updating that network with every outcome so that it learns as it acts; Forecasting at the SKU level, by day, with every sale; Identifying and predicting rare events, such as credit card fraud, and sending automatic notifications to customers and/or staff;


Piercing the Fog: Observability Tools from the Future

When we talk about observability, there are two sets of tools: specific observability tools, such as Zipkin and Jaeger, as well as broader application performance monitoring (APM) tools such as DataDog and AppDynamics. When monitoring systems, we need information from all levels, from method and operating system level tracing to database, server, API call, thread, and lock data tracing. Asking developers to add instrumentation to get these statistics is costly and time-consuming and should be avoided whenever possible. Instead, developers should be able to use plugins, interception, and code injection to collect data as much as possible. APM tools have done a pretty good job of this. Typically they have instrumentation (e.g. Java agents) built into programming languages to collect method-level tracing data, and they have added custom filter logic to detect database, server, API call, thread, and lock data tracing by looking at the method traces. Furthermore, they have added plugins to commonly used middleware tools to collect and send instrumented data. One downside of this approach is that the instrumentation needs will change as programming languages and middleware evolve.
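The interception trick is easier to see in miniature. The Python toy below wraps an existing library function so every call is timed, with no change to the calling code, which is essentially what an APM agent does at scale (an illustrative sketch, not how any particular vendor's agent works):

    import functools
    import time

    def traced(fn):
        """Wrap fn so each call emits a latency record."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                print(f"{fn.__module__}.{fn.__name__} took {elapsed_ms:.2f} ms")
        return wrapper

    # Monkey-patch a library function; callers need no changes.
    import json
    json.dumps = traced(json.dumps)

    json.dumps({"hello": "world"})  # prints a timing record as a side effect

The downside the author notes applies here too: the moment the library changes shape, the patch site has to move with it.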



Quote for the day:

"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne

Daily Tech Digest - February 10, 2021

Journey to Cloud Still Too Complex

“It’s very easy as a technology provider to think that you know the right way to do something and to tell customers that if they would just do it your way, that everything would be easy. By the way, Oracle has definitely been guilty of this in the past. So it’s not as though we have not made this mistake, but what is very clear to us is that everyone wants the benefits of the cloud. Everyone’s going there. “And the reason it’s slow is because it’s too hard. I have this conversation differently. In the transition from when we used to have the kind of old-school flip phones, it took about 17 seconds for everyone to have a smartphone. Why? Because the transition was easy. It was better. “Today, everyone knows the cloud is better, but the transition’s already taken, I don’t know, a decade and we’re at 15% market penetration. So what that’s telling us as a cloud provider is that we have to make it much, much easier if we’re going to give customers those benefits quickly. “And I think when you really take that customer first approach, and you really have that customer empathy, and you understand why it’s difficult, that’s where you see all of these deliverables. It’s why you see the different ways you have to build it for customers. And you also see that around, for example, the multicloud approach.”


How to Improve Data Quality by Using Feedback Loops

If you want to tackle poor Data Quality at its source, it helps to connect those creating the data with the people who use it, so they understand each other’s needs and tasks better. Going back to my example above, if we could facilitate a conversation between the sales consultant and a data analyst, I am sure the sales consultant would better understand how important high-quality data is for the data analyst. Similarly, the analyst could see opportunities to improve the data collection process in customer-facing roles to help their colleague produce much-needed data. In my work with analytics and data communities in organizations across the world, I have seen that bringing people from different roles together and encouraging them to learn from each other can make significant contributions to building a data culture. For Data Quality, a similar approach can work. Why not connect the customer-facing staff who enter data with those who analyze it? Whether it’s the sales consultant for a mobile phone provider, the nurse or front office staff in a hospital, or the bank teller — each of them gathers data from customers, patients, and clients, and the better their process is, the better the resulting Data Quality.


Batteries From Nuclear Waste That Last Thousands Of Years

Functionally, the concept of a diamond battery is similar to that of radioisotope electric generators used for space probes and rovers. The diamond battery generates electric charge through direct energy conversion using a semiconductor diode and C-14 as a radioisotope source. C-14, while undergoing decay, emits short-range low-energy beta particles (essentially the nucleus’ version of electrons) and turns into nitrogen-14, which is not radioactive and gets absorbed in the diamond casing. The beta particles released by C-14, moving with an average energy of 50keV, undergo inelastic collisions with other carbon atoms present in the diamond structure, causing electrons to jump from the valence band to the conduction band, leaving behind holes in the valence band where electrons were earlier present. Successive electron-hole pairs get collected at the metal contact attached to the diamond. C-14 is preferred as the source material because its beta particle radiation is easily absorbed by any solid. ... The amount of C-14 used in each battery is yet to be disclosed, but what is known is that a battery containing 1gm of C-14 would deliver 15 joules (15J) per day, much less than an AA battery, and it would take 5,730 years (the half-life of C-14) for its output to fall to fifty per cent.
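For perspective, 15 joules spread over a day works out to 15 ÷ 86,400 ≈ 0.00017 watts, roughly 0.17 milliwatts of continuous power, while a disposable AA cell stores on the order of 10,000 joules but is spent within hours. The trade on offer is power for longevity.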


When it comes to vulnerability triage, ditch CVSS and prioritize exploitability

The large number of vulnerabilities returned by automated scans is not a new problem. In fact, it is commonly cited by developers as an obstacle to security. To attempt to filter through these large data sets, developers conduct vulnerability triage where they categorize the flaws that have been detected in order of risk they pose to an application’s security or functionality. Then, developers can fix vulnerabilities that seem to be most pressing in order to get software out the door faster. Currently, many developers rely on the Common Vulnerability Scoring System (CVSS). The system represents a basic standard for assessing the severity of a vulnerability. Scores range from 0-10, with 10 being the higher end of the scale (indicating the highest severity). Developers will often assign CVSS scores to the vulnerabilities they detect and order them from highest to lowest, focusing their efforts on those with the highest scores. Unfortunately, this method is suboptimal, ultimately resulting in oversights and less “safe” code. A large part of getting the most out of security scanning tools comes down to a developer’s approach to triaging the vulnerabilities scans detect.
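The exploitability-first ordering the article advocates is easy to express in code. A minimal Python sketch (the field names and findings are made up for illustration):

    # Rank exploitable flaws above high-but-unexploitable CVSS scores.
    findings = [
        {"id": "CVE-A", "cvss": 9.8, "exploit_available": False, "reachable": False},
        {"id": "CVE-B", "cvss": 6.5, "exploit_available": True,  "reachable": True},
        {"id": "CVE-C", "cvss": 7.2, "exploit_available": True,  "reachable": False},
    ]

    def triage_key(finding):
        # Exploitability first, reachability second, CVSS only as a tie-breaker.
        return (finding["exploit_available"], finding["reachable"], finding["cvss"])

    for finding in sorted(findings, key=triage_key, reverse=True):
        print(finding["id"])   # B, then C, then A

Under a pure CVSS sort the order would be A, C, B, exactly the inversion the article warns about.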


Migrating Monoliths to Microservices With Decomposition and Incremental Changes

The problem with a distributed monolith is that it is inherently a more distributed system, with all the associated design, runtime, and operational challenges, yet we still have the coordination activities that a monolith demands. I want to deploy my thing live, but I can't. I've got to wait till you've done your change, but you can't do your change because you're waiting on somebody else. Now, we agree: “Okay, well, on 5 July, we're all going to go live. Is everybody ready? Three, and two, and one, and deploy.” Of course, it always all goes fine. We never have any issues with these types of systems. If an organization has a full-time release-coordination manager or another job along those lines, chances are it has a distributed monolith. Coordinating lockstep deployments of distributed systems is not fun. We end up with a much higher cost of change. The scopes of deployments are much larger. We have more to go wrong. We also have this inherent coordination activity, probably not only around the release activity but also around the general deployment activity. Even a cursory examination of lean manufacturing teaches that reducing handoffs is key to optimizing throughput.


Why Open Source Project Maintainers are Reluctant to use Digital Signatures 2FA

Why not? Most respondents said not including 2FA was a lack of decision rather than a decision. Many were either unaware it was an option, did not consider it because it is not the default behavior, or considered it too restrictive to require. “It wasn’t a decision, it was the default.” Some of the detailed answers to the survey showed that security was not job number one to many developers. They didn’t see any “need for [2FA on] low-risk projects.” Other projects, with a handful of contributors, said they didn’t see the need at all. And, as in the case with so many security failure rationalizations, many thought 2FA was too difficult to use. One even said, “Adding extra hoops through which to jump would be detrimental to the project generally. Our goal is to make the contribution process as easy as possible.” As for digital signatures, where released versions come with cryptographically signed git tags (“git tag -s”) or release packages so that users can verify who released them even if the distributing repo is subverted, they’re not used anywhere near as often as they should be either. 41.53% don’t use them at all while 35.97% use them some of the time. A mere 22.5% use them all the time.
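For reference, the workflow in question is built into git itself (the tag name here is illustrative):

    # Maintainer: create a GPG-signed release tag and publish it.
    git tag -s v1.4.0 -m "release 1.4.0"
    git push origin v1.4.0

    # User: verify who signed the release before trusting it.
    git tag -v v1.4.0        # equivalent: git verify-tag v1.4.0

Verification requires having the maintainer's public key in your keyring, which is its own distribution problem and part of why adoption lags.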


Machine Learning for Computer Architecture

The objective in architecture exploration is to discover a set of feasible accelerator parameters for a set of workloads, such that a desired objective function (e.g., the weighted average of runtime) is minimized under an optional set of user-defined constraints. However, the manifold of architecture search generally contains many points for which there is no feasible mapping from software to hardware. Some of these design points are known a priori and can be bypassed by formulating them as optimization constraints by the user (e.g., in the case of an area budget constraint, the total memory size must not pass over a predefined limit). However, due to the interplay of the architecture and compiler and the complexity of the search space, some of the constraints may not be properly formulated into the optimization, and so the compiler may not find a feasible software mapping for the target hardware. These infeasible points are not easy to formulate in the optimization problem, and are generally unknown until the whole compiler pass is performed. As such, one of the main challenges for architecture exploration is to effectively sidestep the infeasible points for efficient exploration of the search space with a minimum number of cycle-accurate architecture simulations.
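Spelled out, the search is a constrained optimization of roughly this shape (our notation, simplified from the description above):

    minimize over θ:   Σ_k  w_k · Runtime(θ, workload_k)
    subject to:        Area(θ) ≤ area budget
                       a feasible compiler mapping exists for θ

where θ is the vector of accelerator parameters and w_k weights each workload. The last constraint is the troublesome one: it has no closed form and is discovered only by running the full compiler pass, which is what makes cheaply sidestepping infeasible points so valuable.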


How businesses can use AI to get personalisation right

It’s not just the AI engine that is key to success. The way in which your content is administered and managed will play a huge part. In order to quickly serve up the right content, AI needs to be able to identify, retrieve and render it, and having the right content structure is key to the success to this. Content that may have traditionally lived only in the context of an authored web page doesn’t always provide the level of granularity needed to be of any use for personalised content. This is certainly true of product-based personalisation, which, in turn, requires a product-based content structure to enable personalisation engines to read individual data attributes and assemble them in real time. Meticulous metadata is also essential to this process. Metadata is the language that AI understands; it describes the attributes of a product such as category, style and colour. Without the right metadata, personalisation engines cannot identify the right content at the right time. Fast fashion retailers, such as Asos and Boohoo, are leading the way in personalising the presentation of products to customers in this way. Artificial intelligence is human taught. This is the most basic thing to remember when considering an AI implementation of any kind.
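In practice that means every product carries machine-readable attributes the engine can filter and assemble on. A minimal illustration in Python (the field names and values are invented):

    product = {
        "sku": "JKT-0042",
        "category": "outerwear",
        "style": "bomber jacket",
        "colour": "olive",
        "season": "autumn",
    }

    profile = {"preferred_colours": {"olive", "black"}, "categories": {"outerwear"}}

    # The personalisation engine matches product attributes against the profile.
    relevant = (product["colour"] in profile["preferred_colours"]
                and product["category"] in profile["categories"])
    print(relevant)  # True

Without fields like these, the engine has nothing to read, no matter how good the model is.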


Microsoft Viva Heralds New Wave of Collaboration Tools

Viva Insights is available now in public preview. At an individual level, it is designed to help employees protect time for regular breaks, focused work, and learning. At a management level, leaders are able to see team trends (aggregated and deidentified to protect privacy). The analytics offers recommendations to better balance productivity and well-being, according to Microsoft. Viva Learning, available now in private preview, aggregates all the learning resources available in an organization into one place and makes them more discoverable and accessible in the flow of work, according to Microsoft. It incorporates content from LinkedIn Learning and Microsoft Learn as well as from third-party providers such as Coursera and edX, plus content from each company's own private content library. Viva Topics is now available as an add-on to Microsoft 365 commercial plans and makes corporate knowledge easier to discover. It uses AI to surface "topic cards" within conversations and documents across Microsoft 365 and Teams, Microsoft said. Clicking on a card opens a topic page with related documents, conversations, videos, and people. Microsoft Viva is not so much new technology as it is an augmentation of Microsoft 365 and Microsoft Teams, repackaged with some new capabilities that make existing features easier to find and consume, according to Gotta.


Building an MVP That Is Centered on User Behavior

An MVP is used to test and learn from user behavior. It is used to understand whether the product strategy is attuned to solving the user’s problems and whether user expectations are aligned to it. To get the maximum learning from user responses, it is necessary to highlight key differentiators that the product is offering. The users should be able to dive straight into what is being offered so that they can realize its true value. Also, it will help understand whether the product will be able to withstand competitors offering similar or lesser alternatives. The onus is upon the ideators to identify the key differentiators. Proper communication of the differentiators will help the engineering team or the MVP development company to quickly build a functional MVP. It will help accelerate the MVP loop of Build -> Measure -> Learn. A Minimum Viable Product is like a shot at creating a first impression on the prospects and stakeholders. It helps gauge their initial reactions and also the need for subsequent improvements. However, their initial reactions cannot be read or deciphered using facial emotions or verbal remarks. Only data can reveal how users interact with and use the MVP. It will pave the way for future construction and improvement of the final product.



Quote for the day:

"Forget your weaknesses, increase your strengths and be the most awesome you, that you can be." -- Tim Fargo

Daily Tech Digest - February 09, 2021

Digital transformation strategy: 7 factors to re-examine about yours now

In the rush to adjust to work-from-home orders, seismic supply and demand shifts, changing customer and partner needs, and a global health crisis, some shortcuts may have been taken, or longer views set aside. Heading into 2021, IT leaders can take a step back to reassess some important aspects of their digital transformation efforts to make sure they’re on the right track not only for 2021 but beyond. ... While the urgency to transform was necessary, some initiatives conceived or implemented in haste may deserve a second or even third look. “Some may have implemented changes at a pace that didn’t allow for the standard level of care and detail that would normally go into digital transformation projects. Others pivoted away from their technical roadmap,” says Greg Stam, managing director in the CIO advisory at digital business consultancy AHEAD. “It’s critical to re-baseline your digital transformation strategy, starting with any new business goals.” ... “As so many IT leaders scrambled to implement new technologies to help employees remain productive and connected from home, some may be starting to find that the tools they implemented aren’t really serving their true purpose,” says Rob Wiley.


Understanding Linus's Law for open source security

Some people assume that because major software is composed of hundreds of thousands of lines of code, it's basically impossible to audit. Don't be fooled by how much code it takes to make an application run. You don't actually have to read millions of lines. Code is highly structured, and exploitable flaws are rarely just a single line hidden among the millions of lines; there are usually whole functions involved. There are exceptions, of course. Sometimes a serious vulnerability is enabled with just one system call or by linking to one flawed library. Luckily, those kinds of errors are relatively easy to notice, thanks to the active role of security researchers and vulnerability databases. Some people point to bug trackers, such as the Common Vulnerabilities and Exposures (CVE) website, and deduce that it's actually as plain as day that open source isn't secure. After all, hundreds of security risks are filed against lots of open source projects, out in the open for everyone to see. Don't let that fool you, though. Just because you don't get to see the flaws in closed software doesn't mean those flaws don't exist. In fact, we know that they do because exploits are filed against them, too. The difference is that all exploits against open source applications are available for developers (and users) to see so those flaws can be mitigated.


World Economic Forum calls cybersecurity one of the "key threats of the next decade"

The analysts behind the report called cybersecurity failure among the "highest likelihood risks" of the next 10 years and IT infrastructure breakdown "among the highest impact risks of the next decade." In a survey of experts included in the report, 39% of respondents said cybersecurity failure was a critical threat to the world right now and ranked as the most pertinent risk on the list after infectious disease, extreme weather events, and livelihood crises. Nearly 50% said it would be a concern for the next three to five years. The report suggests that in order to make the transition to a fully digital world more smooth, multiple things need to be changed, including "insisting on security and privacy by design in the development of new technologies and digital services." Hitesh Sheth, president and CEO at cybersecurity firm Vectra, said the only surprise in the World Economic Forum Global Risks Report is that cybersecurity failure wasn't ranked higher. "Without secure, high-functioning IT, addressing all the other crises the report names, from climate to digital inequality, becomes much harder. For years our well-understood cyber vulnerabilities have been met with too much rhetoric, too little real action," Sheth said.


UK's leading AI startup and scaleup founders highlight the main pain points of running a fast growth business in the AI sector

“Finding enough time to really invest in strategy” is a significant challenge, according to Miriam Cha, COO and co-founder at Rahko. “We work in two very rapidly evolving areas — AI for drug/material discovery and quantum computing — so developing and continually adapting and refining a strategy that will win requires a lot of careful thought and deep discussion. “We have four founders at Rahko, and we come together very regularly for strategy sessions that can last several days, with the understanding that no one does anything else until we have answered the questions we need answered. This has taken a huge amount of discipline to maintain, but has meant that we are able to make really well thought-out decisions and head in what we believe between us to be the right direction.” Ky Nichol, CEO at Cutover, adds that “it’s hard to maintain focus on strategic goals, with new opportunities and use cases for our capabilities emerging constantly, it’s important to maintain a resilient perspective and prioritise our strategic objectives.” Tim Weil, CEO at Navenio, also agrees that developing a solid business strategy with a team that is all moving in the right direction is essential for the success of any fast growth business, in the AI sector or otherwise.


The future of work: Coming sooner than you think

Zero trust is a general framework in which every user and every system must authenticate itself continually, so that if a breach occurs, attackers can’t move laterally to compromise other systems across the organization (a minimal sketch of the idea follows this excerpt). SASE is a more recent scheme that combines SD-WAN and security into a single, simplified cloud service that can be scaled easily. Together, they can go a long way toward reducing the risks incurred by remote work at scale. But there’s more to a bright future of work than technology solutions. Effective remote management, an area where software development managers tend to have extensive experience, may be most important of all. InfoWorld contributor and former CIO Isaac Sacolick has been there, and in “7 best practices for remote development teams,” he outlines some tried-and-true techniques, including continuous, transparent planning. Sacolick also observes that automation, such as automated testing and change management, can help simplify remote development. It’s important to acknowledge, though, that not all jobs can be remote. Network World contributor Zeus Kerravala pinpoints the skills necessary to run the data center of the future in “How the data center workforce is evolving,” which cites an Uptime Institute study predicting a 15% rise in on-prem data center jobs over six years.
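
To make the zero-trust model concrete, here is a minimal sketch of per-request verification in Python. Every request must present a valid credential, originate from a healthy managed device, and be explicitly authorized; nothing is trusted for being “inside” the network. All names and checks are hypothetical stand-ins, not any vendor’s API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    token: str
    device_id: str
    resource: str

def verify_token(token: str) -> Optional[str]:
    # Stand-in for validating a signed, short-lived credential (e.g. OIDC).
    valid_tokens = {"tok-alice": "alice"}
    return valid_tokens.get(token)

def device_is_healthy(device_id: str) -> bool:
    # Stand-in for a device-posture check against a managed inventory.
    return device_id in {"laptop-042"}

def authorize(user: str, resource: str) -> bool:
    # Least privilege: access only to resources explicitly granted.
    return (user, resource) in {("alice", "payroll-db")}

def handle(request: Request) -> str:
    # Zero trust: every request repeats all three checks, every time,
    # so a stolen session on one box cannot roam laterally.
    user = verify_token(request.token)
    if user is None or not device_is_healthy(request.device_id):
        return "403 denied"
    if not authorize(user, request.resource):
        return "403 denied"
    return f"200 ok: {user} -> {request.resource}"

print(handle(Request("tok-alice", "laptop-042", "payroll-db")))  # 200 ok
print(handle(Request("tok-alice", "byod-999", "payroll-db")))    # 403 denied
```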


Quantum Leap: Scientists Build Chip That Can Handle Thousands Of Qubits

Quantum computers today are at a stage similar to that of classical computers in the 1940s, when machines needed entire control rooms to function. However, this chip, according to the scientists, is the most advanced integrated circuit ever built to operate at deep cryogenic temperatures. “The quantum computers that we have now are still lab prototypes and are not commercially relevant yet. Hence, this is definitely a big step towards building practical and commercially relevant quantum computers,” said Mr Viraj Kulkarni. “But I think that we are still far away from it. This is because of error correction. Any computing device always has errors in it, and no electronic device can be completely perfect. There are various techniques that computers use to correct those errors. Now the problem with quantum computing is that qubits are very fragile. Even a slight increase in temperature, vibrations, or even cosmic rays can make qubits lose their quantumness, and this introduces errors. So the key question of whether we can really control these errors is still relevant.” Nivedita Dey, research coordinator at Quantum Research and Development Labs, said that qubit noise remains a roadblock in developing quantum computers.
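
A toy way to see why error correction matters is the classical three-bit repetition code, the simplest ancestor of quantum error-correcting codes. Real quantum codes are far more involved, since qubits cannot be copied and their errors are continuous, but the majority-vote intuition below carries over; the noise level is invented for illustration.

```python
import random

def encode(bit: int) -> list:
    return [bit, bit, bit]          # protect one logical bit with three copies

def noisy_channel(bits: list, p: float) -> list:
    # Each physical bit flips independently with probability p, standing in
    # for thermal noise, vibration, or a stray cosmic ray.
    return [b ^ (random.random() < p) for b in bits]

def decode(bits: list) -> int:
    return int(sum(bits) >= 2)      # majority vote corrects any single flip

p, trials = 0.05, 100_000
raw = sum(noisy_channel([0], p)[0] for _ in range(trials))
coded = sum(decode(noisy_channel(encode(0), p)) for _ in range(trials))
print(f"uncoded error rate: {raw / trials:.4f}")    # ~p = 0.05
print(f"coded error rate:   {coded / trials:.4f}")  # ~3p^2 = 0.0075
```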


A Beginner's Guide to Using Django’s Impressive Data Management Abilities

Django is a Python web framework that helps developers bring applications from concept to completion as fast as possible. A high-level framework like Django offers a comprehensive set of features for web development: an HTTP application server, a storage mechanism such as a database, a template engine, a request dispatcher, and an authentication module. As I mentioned before, I focus on the part of Django that allows you to interact with your relational databases, the Object-Relational Mapper (ORM). The ORM provides all the functions you need to create and manipulate the data and tables of your database without writing any SQL commands. Each Django app includes a module named models.py, which defines the structure of the database tables you want to create. To translate the Python objects into database tables, the ORM comes into play: it is responsible for communicating with your database, which includes translating your defined models into the right database structure and executing any database operations.
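
For readers new to the pattern, here is a minimal sketch of what such a models.py can look like and how the ORM replaces hand-written SQL. The model and field names are hypothetical, chosen only for illustration.

```python
# myapp/models.py
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    title = models.CharField(max_length=200)
    published = models.DateField()
    author = models.ForeignKey(Author, on_delete=models.CASCADE)

# Running `python manage.py makemigrations` and `migrate` lets the ORM
# turn these classes into database tables. After that, no SQL is needed:
#
#   author = Author.objects.create(name="Ursula K. Le Guin")
#   Book.objects.create(title="The Dispossessed",
#                       published="1974-05-01", author=author)
#   recent = Book.objects.filter(published__year__gte=1970)
#
# The last line compiles to a SELECT ... WHERE query behind the scenes.
```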


A Day with Intel on Hacking and Scaling Machine Learning with Open Source

Machine learning models are designed to be resilient and flexible and to meet business goals, but the engineers who build the products and algorithms often face obstacles in ensuring they work reliably, quickly, and at scale. Frameworks are not easy to use. As many of the world’s leading organizations embrace approaches to scaling machine learning, Intel is offering the tools, applications, and hardware to make it easier for developers to build, deploy, and manage artificial intelligence and machine learning models that can be used by tens of thousands of people instead of just a few. ... Join us on Feb. 10 at 9 a.m. PT for a live Day of Machine Learning with Intel discussion, where we’ll dive deeper into oneAPI. We’ll explore the software-at-scale issues with machine learning and the hardware needed for it. We’ll look at the tools and the infrastructure that are used for developing, deploying, and managing the algorithms. We’ll also dive into questions around how Intel’s oneAPI toolkit can resolve problems that teams face, and how oneAPI fits with existing frameworks such as PyTorch and TensorFlow.


What's New in IT Security?

If the plan is to install new software or a security package, or to update a vendor’s software across many devices, the rollout should be executed uniformly across all end users and locations, and across all devices and platforms. Commercial software distribution platforms are available to assist with this task. The preferred method of performing software and security upgrades is a “push” distribution, in which IT pushes the new software or software upgrade out to the end device, network, or platform automatically. This is in contrast to the “pull” method, which notifies the user that a new version of software is available but depends on the user to pull or download the new release onto his or her device or network. “Push” is the better methodology because you don’t have to worry about users failing to perform a download, leaving themselves (and the company) open to security vulnerabilities that a new software release would resolve (the sketch after this excerpt contrasts the two models). The SolarWinds compromise occurred because malware had been embedded in a software release that clients were installing. The lesson for IT is to vet your vendors’ security practices as they pertain to data centers, operational software, business partners, and the end products they are selling to you.
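
The push-versus-pull distinction can be sketched in a few lines. The device registry and install step below are hypothetical stand-ins, not a real distribution platform’s API:

```python
LATEST = "2.4.1"
devices = {"laptop-01": "2.3.0", "laptop-02": "2.4.1", "kiosk-07": "2.1.9"}

def pull_check(device: str) -> None:
    # Pull: the endpoint is only *notified*; patching waits on the user,
    # which is exactly the gap that leaves vulnerable versions running.
    if devices[device] != LATEST:
        print(f"{device}: update {LATEST} available, waiting on user")

def push_rollout() -> None:
    # Push: IT installs the release on every managed device uniformly,
    # across all platforms, with no user action required.
    for device, version in devices.items():
        if version != LATEST:
            devices[device] = LATEST  # stand-in for an automated install
            print(f"{device}: upgraded to {LATEST}")

for d in devices:
    pull_check(d)
push_rollout()
```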


Hacker Breached Florida City's Water Treatment System

A hacker breached a Florida city's water treatment network on Friday, raising the amount of lye that would have been added to the water to a dangerous level. But city officials in Oldsmar, Florida, say they were able to spot the intrusion and quickly reverse the setting before it took effect. Reuters reports that the intruder reached the water treatment software after first gaining access to the TeamViewer remote access and control software. "Importantly, the public was never in danger," Pinellas County Sheriff Bob Gualtieri said during a Monday press conference. Oldsmar, which is about 17 miles northwest of Tampa, has a population of about 15,000. In recent years, officials have focused increasing attention on the security of the industrial control systems used to manage municipalities' electricity and water. Such systems are often connected to the internet and could pose vast public safety risks if infiltrated by hackers. Questions will likely now be raised about how the city used and configured TeamViewer for remote access, including which access controls were in place. TeamViewer has long been an attractive target because it's designed to give administrators full remote access to and control of systems.



Quote for the day:

"At the heart of great leadership is a curious mind, heart, and spirit." -- Chip Conley

Daily Tech Digest - February 08, 2021

The Future of Healthcare Is in the Cloud

While the idea of making information accessible anywhere and at any time offers obvious advantages, there are obstacles to overcome. Potential security risks and concerns over compliance have long held back cloud adoption in healthcare. IT staff need to ensure timely software updates, maintain network availability, and institute a regular and robust backup routine. Healthcare organizations also need to consider how data will be processed by a third party, examine with whom their cloud partners are in business, and ensure security standards extend to any cloud networks they use. Cloud providers with healthcare experience and an understanding of the unique compliance landscape will be favored as the industry rises to meet these challenges. Everyone should take comfort from the fact that the most advanced healthcare organizations in the world have announced major cloud initiatives after much deliberation and due diligence. Mayo Clinic’s announcement of its partnership with Google is one such example. The dream of global collaboration relies on cloud computing. Healthcare professionals in different countries can now trade massive data sets easily. While collaboration like this has typically been reserved for esoteric research projects, it’s now being employed to tackle global health problems.


Q&A: Telehouse engineering lead discusses AI benefits for data centres

Developments in network management AI and cyber security are allowing us to detect unusual activity outside of usual traffic patterns. In a typical office environment, if a company device logs in at 3am and starts taking gigabytes of data from the business, that will be flagged as atypical behaviour. AI can analyse this breach quickly and respond by disabling that device’s network access to stop the possible data loss. That data transfer could also take place in the middle of the working day, but it might come from a device that would not normally transfer that volume of data, such as a laptop solely used for presentations. The AI already understands the typical behaviour patterns of that device and will flag any inflow or outflow of data that does not fit its typical usage pattern. In a data centre it is no different: every server has its own typical operational pattern, cyber security systems can monitor these patterns, and any unusual activity can be flagged. It is possible to take this further than simple network monitoring by interfacing with other systems, for example by detecting whether server behaviour changed after someone entered a secure server hall, which could indicate that a server has been tampered with.
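
The baseline-and-flag idea described here can be sketched very simply: learn each device’s typical outbound transfer volume, then flag observations that fall far outside it. Production systems use far richer features and models; the history below is invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical outbound MB-per-hour history for one presentation laptop.
history = [12, 8, 15, 10, 9, 14, 11, 13, 10, 12]

def is_anomalous(observed_mb: float, baseline: list, k: float = 4.0) -> bool:
    # Flag anything more than k standard deviations from the device's norm.
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observed_mb - mu) > k * sigma

print(is_anomalous(11, history))    # False: fits the usual pattern
print(is_anomalous(4096, history))  # True: gigabytes from a laptop that
                                    # normally moves ~12 MB/hour, so its
                                    # network access can be disabled
```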


The year ahead in DevOps and agile: bring on the automation, bring on the business involvement

The slower-than-desired pace of automation stems from "organizations prohibiting developers from accessing production environments, probably because developers made changes in production previously that caused production problems," says Newcomer. "It's hard to change that kind of policy, especially when incidents have occurred. Another reason is simple institutional inertia - processes and procedures are difficult to change once fully baked into daily practice, especially when it's someone's specific job to perform these manual deployment steps." DevOps and agile progress needs to be well-measured and documented. "People have different definitions of DevOps and agile," says Lei Zhang, head of Bloomberg's Developer Experience group. Zhang's team turned to the measurements established within Google's DevOps Research and Assessment (DORA) guidelines - lead time, deploy frequency, time to restore, and change fail percentage - and focuses on the combination. "We think the effort is cohesive, while the results have huge varieties. Common contributors to such varieties include complex dependencies due to the nature of the business, legacy but crucial software artifacts, compliance requirements, and low-level infrastructure limitations."
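
For teams adopting these measurements, all four can be derived from little more than a log of deployments. A minimal sketch, with a hypothetical record format:

```python
from datetime import datetime, timedelta

# (commit time, deploy time, deploy failed?, time to restore if it failed)
deploys = [
    (datetime(2021, 2, 1, 9),  datetime(2021, 2, 1, 15), False, None),
    (datetime(2021, 2, 2, 10), datetime(2021, 2, 3, 11), True,  timedelta(hours=2)),
    (datetime(2021, 2, 4, 8),  datetime(2021, 2, 4, 12), False, None),
]

days_observed = 7
lead_times = [deployed - committed for committed, deployed, _, _ in deploys]
restores = [r for _, _, failed, r in deploys if failed]

print("deploy frequency:", len(deploys) / days_observed, "per day")
print("mean lead time:", sum(lead_times, timedelta()) / len(lead_times))
print("change fail %:", 100 * len(restores) / len(deploys))
print("mean time to restore:", sum(restores, timedelta()) / len(restores))
```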


Performance Tuning Techniques of Hive Big Data Table

Developers working on big data applications face a prevalent problem when reading Hadoop file system data or Hive table data. The data is written to Hadoop clusters using Spark Streaming, NiFi streaming jobs, or other streaming or ingestion applications, and these ingestion jobs write a large number of small data files, also called part files, into the Hadoop cluster. These part files are written across different data nodes, and as the number of files in a directory increases, reading the data becomes tedious and turns into a performance bottleneck for any other app or user that tries to read it. One of the reasons is that the data is distributed across nodes: the more scattered it is, the longer a job takes to read it, roughly on the order of “N * (number of files)”, where N is the number of data nodes holding the files. For example, if there are 1 million files, when we run a MapReduce job the mappers have to run for 1 million files across the data nodes, which leads to full cluster utilization and performance issues. For beginners: a Hadoop cluster comes with several Name Nodes, and each Name Node will have multiple Data Nodes. Ingestion/streaming jobs write data across multiple data nodes, which creates performance challenges when reading that data back.
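
A common mitigation for this small-files problem is to compact the part files so readers see a handful of large files instead of thousands of tiny ones (Hive itself also offers merge settings such as hive.merge.mapfiles). A minimal PySpark sketch, with hypothetical paths:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compact-small-files").getOrCreate()

# A partition written by a streaming job, typically as many small part files.
df = spark.read.parquet("/warehouse/events/date=2021-02-08")

# coalesce() merges partitions without a full shuffle, so the rewrite
# produces a few large files that mappers can read efficiently.
(df.coalesce(8)
   .write.mode("overwrite")
   .parquet("/warehouse/events_compacted/date=2021-02-08"))
```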


AI Support Bots Still Need That Human Touch

At the core of providing effective support for critical issues is personalized, expedited service. In contrast to the wholesale outsourcing of frontline support to bots with little documentation, a hybrid approach combining best practices, best-of-breed technology, and a trained staff of experts offers the best option for delivering and maintaining mission-critical networks. When a network administrator has an issue that is beyond the automated self-healing functions, the first call should be readily answered by a dedicated support expert who knows exactly how the network is configured and its history, and who has all of the pertinent incident data at their proverbial fingertips. Issues can then be quickly resolved, and in the event that something unexpected pops up, it can be handled without having to start all over again. Today, networks are being stressed like never before by remote work, IoT, cloud migration, and so forth, spawning novel, unforeseen issues that cannot be handled by limited AI-based tools. During these periods, access to an engineer with intimate knowledge of the on-site system and configuration will be the lifeline network teams need to help diagnose and resolve these types of challenges. Is this simply a Luddite view of artificial intelligence?


Hidden Dangers of Microsoft 365's Power Automate and eDiscovery Tools

Power Automate and eDiscovery Compliance Search, application tools embedded in Microsoft 365, have emerged as valuable targets for attackers. The Vectra study revealed that 71% of the monitored accounts showed suspicious activity involving Power Automate, and 56% showed similarly suspicious behavior involving the eDiscovery tool. A follow-up study revealed that suspicious Azure Active Directory (AD) operations and Power Automate flow creation occurred in 73% and 69% of monitored environments, respectively. ... Microsoft Power Automate is the new PowerShell: designed to automate mundane, day-to-day user tasks in both Microsoft 365 and Azure, it is enabled by default in all Microsoft 365 applications. This tool can reduce the time and effort it takes to accomplish certain tasks, which is beneficial for users and potential attackers alike. With more than 350 connectors to third-party applications and services available, cybercriminals who abuse Power Automate have vast attack options. The malicious use of Power Automate recently came to the forefront when Microsoft announced it had found advanced threat actors in a large multinational organization using the tool to automate the exfiltration of data. This incident went undetected for over 200 days.
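
As a starting point for hunting this kind of activity, defenders can review flow lifecycle events in the Microsoft 365 unified audit log. The sketch below scans a hypothetical JSON-lines export; the field names and operation values are assumptions to verify against your tenant’s actual export schema.

```python
import json

SUSPECT_OPS = {"CreateFlow", "EditFlow"}  # assumed flow lifecycle operations

def flag_flow_events(path: str):
    # Yield (user, timestamp) for every Power Automate flow creation/edit.
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            if (record.get("Workload") == "MicrosoftFlow"
                    and record.get("Operation") in SUSPECT_OPS):
                yield record.get("UserId"), record.get("CreationTime")

# Usage (with a hypothetical export file):
#   for user, when in flag_flow_events("audit_export.jsonl"):
#       print(f"review: {user} touched a flow at {when}")
```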


The ‘It’ Factors in IT Transformation

Shadow IT has been the bane of many a CIO for as long as I can remember. But how many organizations focus on complete business-IT alignment, where the operating model supports proactively eliminating business operation disruptions as opposed to merely meeting internal IT SLAs? The best way to generate this elusive value from an IT revamp is to use existing concepts and add vital new ones to get transformational results. And the outcome? A business that can comfortably jump barriers and leapfrog competitors for whom IT is an afterthought. So, let’s break this down a bit. What are the “it” factors that separate a successful IT transformation from the ones with lesser outcomes? For starters, in the former, IT leaders address every critical part of the whole, and the framework encourages C-level executives to take the plunge. Enterprise executives sometimes get cornered by organizational dynamics into playing it safe, into taking baby steps. Unfortunately, though, as former British Prime Minister David Lloyd George so appropriately put it, “You can’t cross a chasm in two small jumps.” Committing to a well-planned yet courageous leap is critical for success from the very outset.


Organizations can no longer afford a reactive approach to risk management

“Business leaders must be vigilant in scanning for emerging issues and make actionable plans to adjust their strategies and business models while being authentic in fostering a trust-based, innovative culture and the organizational resilience necessary to successfully navigate disruptive change. Digitally mature companies with an agile workforce were ready when COVID-19 hit and are the ones best positioned to continue to ride the wave of rapid acceleration of digitally driven change through the pandemic and beyond.” Consistent with the survey’s findings in previous years, data security and cyber threats again rank in the top 10 risks for both 2021 and 2030. The continuously evolving nature of cyber and privacy risks underscores the need for a secure operating environment in which nimble workforces can regularly refresh the technology and skills in their arsenal to remain competitive. “If there’s any risk that all organizations across industries and geographies must maintain focus on, it’s cybersecurity and privacy,” said Patrick Scott, executive VP, Industry Programs, Protiviti. “While the areas that businesses will need to address may change as they transform their business models and increase their resiliency to face the future confidently, cybersecurity and privacy threats will remain a constant and should be at or near the top of the list.”


Digital Transformation Demands a Culture of Innovation

Research done over the past five years by the Digital Banking Report finds that corporate culture is much more important than the size of the company, the level of investment, the geographic location, or even the regulatory environment. The question becomes: how can leaders build and reinforce an innovation culture within their organization? According to research by Jay Rao and Joseph Weintraub, professors at Babson College, published in the MIT Sloan Management Review, an innovative culture rests on a foundation of six building blocks: resources, processes, values, behavior, climate, and success. Each of these building blocks is dynamically linked to the others. The professors’ research aligns with insights recently found by the Digital Banking Report, which show that increasing investment, changing processes, and measuring success are imperative … but not enough. Organizations must also focus on the overarching company values, the actions of people within the organization (behaviors), and the internal environment (climate). These are much less tangible and harder to measure and manage, but just as important to the success of innovation and the ability to create a sustained competitive advantage.


How Enterprise AI Use Will Grow in 2021: Predictions from Our AI Experts

Hillary Ashton, the chief product officer at data analytics vendor Teradata, said that AI will be helpful in 2021 for many companies as businesses look toward reopening and recouping sufficient revenue streams as the COVID-19 pandemic slowly releases its grip on the world. “They’ll need to leverage smart technologies to gather key insights in real-time that allow them to do so,” said Ashton. “Adopting AI technologies can help guide companies to understand if their strategies to keep customers and employees safe are working, while continuing to foster growth. As companies recognize the unique abilities of AI to help ease corporate policy management and compliance, ensure safety and evolve customer experience, we'll see boosted rates of AI adoption across industries." That will also involve using AI to boost safety and compliance measures inside offices, she said. “As companies look to eventually return in some form to the office, we'll see investments in AI rise across the board. AI-driven algorithms can scour meeting invites, email traffic, business travel and GPS data from employer-issued computers and cell phones to give businesses advance warnings of certain danger zones or to quickly halt a potential outbreak at a location. ..."



Quote for the day:

"A casual stroll through the lunatic asylum shows that faith does not prove anything." -- Friedrich Nietzsche