Daily Tech Digest - November 11, 2020

The Role of Relays In Big Data Integration

The very nature of big data integration requires an organization to become more flexible in some ways, particularly when gathering input and metrics from such varied sources as mobile apps, browser heuristics, A/V input, software logs, and more. The number of different methodologies, protocols, and formats that your organization needs to ingest while complying with both internal and government-mandated standards can be staggering. ... What if, instead of just allowing all of that data to flow in from dozens of information silos, you introduced a set of intelligent buffers? Imagine that each of these buffers was purpose-built for the kind of input you needed to receive at any given time: shell scripts, REST APIs, federated DBs, hashed log files, and the like. Let’s call these intelligent buffers what they really are: relays. They ingest SSL-encrypted data, send out additional queries as needed, and provide fault-tolerant data access according to ACLs specific to the team and server-side apps managing that dataset. If you were to set up such a distributed relay architecture to deal with your big data integration chain, it might look something like this...
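A minimal, purely hypothetical sketch of what one such relay might look like in Python: a purpose-built buffer that ingests records from a single kind of source and serves them back only to teams named in its ACL. The class and field names here are invented for illustration, not part of any real product.

```python
# Hypothetical sketch of one "relay" in the chain described above: it accepts
# records from a single kind of source, enforces a team-specific ACL, and
# buffers the data for the server-side app that owns that dataset.
from dataclasses import dataclass, field


@dataclass
class Relay:
    name: str                       # e.g. "rest-api-relay" or "log-file-relay"
    acl: set                        # teams allowed to read from this relay
    buffer: list = field(default_factory=list)

    def ingest(self, record: dict) -> None:
        """Accept an already-decrypted record from the upstream source."""
        self.buffer.append(record)

    def read(self, team: str) -> list:
        """Provide data access only to teams named in the ACL."""
        if team not in self.acl:
            raise PermissionError(f"{team} may not read from {self.name}")
        return list(self.buffer)


# One purpose-built relay per input type, as the article suggests.
rest_relay = Relay(name="rest-api-relay", acl={"analytics"})
rest_relay.ingest({"source": "mobile-app", "event": "login"})
print(rest_relay.read("analytics"))
```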


Malware Hidden in Encrypted Traffic Surges Amid Pandemic

Ransomware attacks delivered via SSL/TLS channels soared 500% between March and September, with a plurality of the attacks (40.5%) targeting telecommunications and technology companies. Healthcare organizations were targeted more heavily than entities in other verticals and accounted for 1.6 billion, or over 25%, of all SSL-based attacks Zscaler blocked this year. Finance and insurance companies clocked in next with 1.2 billion, or 18%, of attacks blocked, and manufacturing organizations were the third-most targeted, with some 1.1 billion attacks directed against them. Deepen Desai, CISO and vice president of security research at Zscaler, says the trend shows why security groups need to be wary about encrypted traffic traversing their networks. While many organizations routinely encrypt traffic as part of their security best practices, fewer are inspecting it for threats, he says. "Most people assume that encrypted traffic means safe traffic, but that is unfortunately not the case," Desai says. "That false sense of security can create risk when organizations allow encrypted traffic to go uninspected."


Shadow IT: The Risks and Benefits That Come With It

Covid-19-induced acceleration of remote work has led to employees being somewhat lax about cybersecurity. Shadow IT might make business operations easier – and many companies have certainly needed that in the last few months – but from the cybersecurity point of view, it also brings about more risks. If your IT team doesn’t know about an app or a cloud system that you’re using in your work, they can’t be responsible for any consequences of such usage. This includes those impacting the infrastructure of the entire organization. The responsibility falls on you to ensure the security of your company’s data whilst using the shadow IT app. Otherwise, your entire organization is at risk. It’s also easy to lose your data if your shadow IT systems don’t back stuff up. If they’re your only method of storage and something goes wrong, you could potentially lose all your valuable data. If you work in government, healthcare, banking, or another heavily regulated sector, chances are that you have local normative acts regulating your IT usage. It’s likely that your internal systems wouldn’t even allow you to access certain websites or apps.


Refactoring Java, Part 2: Stabilizing your legacy code and technical debt

Technical debt is code with problems that can be improved with refactoring. The technical debt metaphor is that it’s like monetary debt. When you borrow money to purchase something, you must pay back more money than you borrowed; that is, you pay back the original sum plus interest. When someone writes low-quality code or writes code without first writing automated tests, the organization incurs technical debt, and someone has to pay interest, at some point, on the debt that’s due. The organization’s interest payments aren’t necessarily in money. The biggest cost is the loss of technical agility, since you can’t update or otherwise change the behavior of the software as quickly as needed. And less technical agility means the organization has less business agility: the organization can’t meet stakeholders’ needs at the desired speed. Therefore, the goal is to refactor debt-ridden code. You’re taking the time to fix the code to improve technical and business agility. Now let’s start playing with the Gilded Rose kata’s code and see how to stabilize it, while preparing to add functionality quickly in an agile way. One huge problem with legacy code is that someone else wrote it.
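A common first step when stabilizing legacy code such as the Gilded Rose kata is a characterization (or "golden master") test that pins the current behavior in place before any refactoring begins. The sketch below uses a Python port of the kata purely for illustration; the gilded_rose import path and the specific item are assumptions, and the expected values simply record whatever the existing code does today.

```python
# A minimal characterization ("golden master") test sketch: before refactoring,
# pin down whatever the legacy code currently does so any change is detectable.
# `Item` and `update_quality` stand in for a Python port of the Gilded Rose
# kata; adjust the import to wherever your copy of the legacy code lives.
import unittest

from gilded_rose import Item, update_quality  # hypothetical module path


class GildedRoseCharacterizationTest(unittest.TestCase):
    def test_normal_item_degrades_as_it_does_today(self):
        items = [Item(name="Elixir of the Mongoose", sell_in=5, quality=7)]
        update_quality(items)
        # These expected values record the *current* behavior, right or wrong.
        self.assertEqual(items[0].sell_in, 4)
        self.assertEqual(items[0].quality, 6)


if __name__ == "__main__":
    unittest.main()
```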


Interactive Imaging Technologies in the Wolfram Mathematica

The range of mathematical problems that can be solved using computer algebra systems is constantly expanding. Considerable research effort is directed at developing algorithms for calculating topological invariants of manifolds and knots, including knots of algebraic curves, cohomology of various mathematical objects, and arithmetic invariants of rings of integer elements in fields of algebraic numbers. Another example of modern research is quantum algorithms, which sometimes have polynomial complexity while the existing classical algorithms have exponential complexity. Computer algebra encompasses theory, technology, and software. The applied results include algorithms and software for solving problems on a computer in which both the initial data and the results take the form of mathematical expressions and formulas. The main product of computer algebra has become computer algebra software systems. There are many systems in this category, numerous publications are devoted to them, and systematic updates are published presenting the capabilities of new versions.


EU to introduce data-sharing measures with US in weeks

Companies will be able to use the assessment to decide whether they want to use a data transfer mechanism, and whether they need to introduce additional safeguards, such as encryption, to mitigate any data protection risks, said Gencarelli. The EC is expected to offer companies “non-exhaustive” and “non-prescriptive” guidance on the factors they should take into account. This includes the security of computer systems used, whether data is encrypted and how organisations will respond to requests from the US or other government law enforcement agencies for access to personal data on EU citizens. Gencarelli said relevant questions would include: What do you do as a company when you receive an access request? How do you review it? When do you challenge it – if, of course, you have grounds to challenge it? Companies may also need to assess whether they can use data minimisation principles to ensure that any data on EU citizens they hand over in response to a legitimate request by a government is compliant with EU privacy principles. The guidelines, which will be open for public consultation, will draw on the experience of companies that have developed best practices for SCCs and of civil society organisations.


Unlock the Power of Omnichannel Retail at the Edge

The Edge exists wherever the digital world and physical world intersect, and data is securely collected, generated, and processed to create new value. According to Gartner, by 2025, 75 percent of data will be processed at the Edge. For retailers, Edge technology means real-time data collection, analytics and automated responses where they matter most — on the shop floor, be that physical or virtual. And for today’s retailers, it’s what happens when Edge computing is combined with Computer Vision and AI that is most powerful and exciting, as it creates the many opportunities of omnichannel shopping. With Computer Vision, retailers enter a world of powerful sensor-enabled cameras that can see much more than the human eye. Combined with Edge analytics and AI, Computer Vision can enable retailers to monitor, interpret, and act in real-time across all areas of the retail environment. This type of vision has obvious implications for security, but for retailers it also opens up huge possibilities in understanding shopping behavior and implementing rapid responses. For example, understanding how customers flow through the store, and at what times of the day, can allow the retailer to put more important items directly in their paths to be more visible.


4 Methods to Scale Automation Effectively

An essential element of the automation toolkit is the value-determination framework, which guides the identification and prioritization of automation opportunities. However, many frameworks apply such a heavy weighting to cost reduction that other value dimensions are rendered meaningless. Evaluate impacts beyond savings to capture other manifestations of value; this will expand the universe of automation opportunities and appeal to more potential internal consumers. Benefits such as improving quality, reducing errors, enhancing speed of execution, liberating capacity to work on more strategic efforts, and enabling scalability should be appropriately considered, incorporated, and weighted in your prioritization framework. Keep in mind that where automation drives the greatest value changes over time, depending on both evolving organizational priorities and how extensive the reach of the automation program has been. Periodically reevaluate the value dimensions of your framework and their relative weightings to determine whether any changes are merited. Typically, nascent automation programs take an “inside-out” approach to developing capability, where the COE is established first and federation is built over time as ownership and participation extends radially out to business functions and/or IT.
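To make the weighting idea concrete, here is a small illustrative sketch of a multi-dimension prioritization score in Python. The dimensions, weights, and scores are invented examples; a real framework would substitute the organization's own.

```python
# Illustrative sketch of a prioritization framework that weights several value
# dimensions, not just cost reduction. Dimensions, weights, and scores are
# made-up examples, not a recommended calibration.
WEIGHTS = {
    "cost_reduction": 0.30,
    "quality_improvement": 0.20,
    "error_reduction": 0.15,
    "speed_of_execution": 0.15,
    "capacity_liberated": 0.10,
    "scalability": 0.10,
}


def opportunity_score(scores: dict) -> float:
    """Weighted sum of 0-10 scores across all value dimensions."""
    return sum(WEIGHTS[dim] * scores.get(dim, 0) for dim in WEIGHTS)


invoice_bot = {"cost_reduction": 8, "error_reduction": 9, "speed_of_execution": 6}
print(f"Invoice-processing bot: {opportunity_score(invoice_bot):.2f} / 10")
```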


Digital transformation: 5 ways to balance creativity and productivity

One of the biggest challenges is how to ensure that creative thinking is an integral part of your program planning and development. Creativity is fueled by knowledge and experience. It’s therefore important to make time for learning, whether that’s through research, reading the latest trade publication, listening to a podcast, attending a (virtual) event, or networking with colleagues. It’s all too easy to dismiss this as a distraction and to think “I haven’t got time for that” because you can’t see an immediate output. But making time to expand your horizons will do wonders for your creative thinking. ... However, the one thing we initially struggled with was how to keep being innovative. We were used to being together in the same room, bouncing ideas off one another, and brainstorms via video call just didn’t have the same impact. However, by applying some simple techniques such as interactive whiteboards and prototyping through demos on video platforms, we’ve managed to restore our creative energy. To make it through the pandemic, companies have had to think outside the box, either by looking at alternative revenue streams or adapting their existing business model. Businesses have proved their ability to make decisions, diversify at speed, and be innovative. 


Google Open-Sources Fast Attention Module Performer

The Transformer neural-network architecture is a common choice for sequence learning, especially in the natural-language processing (NLP) domain. It has several advantages over previous architectures, such as recurrent neural networks (RNNs); in particular, the self-attention mechanism that allows the network to "remember" previous items in the sequence can be executed in parallel on the entire sequence, which speeds up training and inference. However, since self-attention can link each item in the sequence to every other item, the computational and memory complexity of self-attention is O(N²), where N is the maximum sequence length that can be processed. This puts a practical limit on sequence length of around 1,024 items, due to the memory constraints of GPUs. The original Transformer attention mechanism is implemented by a matrix of size N×N, followed by a softmax operation; the rows and columns represent queries and keys, respectively. The attention matrix is multiplied by the input sequence to output a set of similarity values. Performer's FAVOR+ algorithm decomposes the matrix into two matrices which contain "random features": random non-linear functions of the queries and keys.
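A rough NumPy sketch of why the decomposition helps: exact attention materializes an N×N matrix, while a random-feature approximation only ever forms N×r and r×d products. Note this uses plain ReLU random features purely for illustration; Performer's actual FAVOR+ mechanism relies on carefully constructed positive orthogonal random features and comes with approximation guarantees this toy version lacks.

```python
# Exact attention builds an N x N matrix; a random-feature approximation only
# forms N x r and r x d products, so the quadratic term never appears.
import numpy as np

N, d, r = 1024, 64, 256          # sequence length, head dim, number of features
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))

# Exact attention: O(N^2) time and memory.
scores = Q @ K.T / np.sqrt(d)                    # N x N
weights = np.exp(scores)
exact = (weights / weights.sum(axis=1, keepdims=True)) @ V

# Random-feature approximation: O(N * r) -- never materializes an N x N matrix.
W = rng.standard_normal((d, r)) / np.sqrt(r)     # random projection
phi = lambda X: np.maximum(X @ W, 0.0)           # non-linear random features
Qf, Kf = phi(Q), phi(K)
num = Qf @ (Kf.T @ V)                            # (N x r) times (r x d)
den = Qf @ Kf.sum(axis=0, keepdims=True).T       # per-query normalizer
approx = num / (den + 1e-6)

print(exact.shape, approx.shape)                 # both (N, d)
```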



Quote for the day:

"Don't let your future successes be prisoners of your past failure, shape the future you want." -- Gordon Tredgold

Daily Tech Digest - November 10, 2020

CIOs to Hit the Gas on Digital Business in 2021

"We have to go into the year 2021 absolutely hating the word average," Lovelock said. "As soon as you say 'the average is', the only thing you are going to know for sure is that absolutely nobody is going to do that." Some CIOs spent on devices to get their workforces equipped to work from home. Others didn't. That's because executives looking to preserve cash in a crisis cut back where they could, according to Lovelock. "In 2020 devices is one of those first areas where you can save cash," he said. "When CIOs are faced with cash flow restrictions like they were in March and April, the first thing you save on or the first thing you defer is that deferable spending. That's mobile phones, laptops, all those hard things you can buy and pay cash up front form. You can sweat these assets." Meanwhile, categories that saw huge growth included desktop as a service and cloud-based video conferencing, according to Lovelock. These extremes in spending are part of what makes the 2020 recession different from the Great Recession of 2009 and 2010. That earlier economic downturn hit everyone across the board. "The decline in IT spending was more evenly spread," Lovelock said.


Microsoft adds a new Linux: CBL-Mariner

Investing in a lightweight Linux such as CBL-Mariner makes a lot of sense, considering Microsoft’s investments in container-based technologies. Cloud economics require hosts to use as few resources as possible, allowing services such as Azure to achieve high utilization. At the same time, Kubernetes containers need as little overhead as possible, allowing as many pods per node as possible and allowing new nodes to be launched as quickly as feasible. The same is true of edge hardware, especially the next generation of edge nodes intended for use with 5G networks. Here, like the public cloud, workloads are what’s most important, shifting them and data closer to users. Microsoft uses its growing estate of edge hardware as part of the Azure Content Delivery Network outside its main Azure data centers, caching content from Azure Web apps and from hosted video and file servers, with the aim of reducing latency where possible. The Azure CDN is a key component of its Jamstack-based Azure Static Websites service, hosting pages and JavaScript once published from GitHub. In the past, Red Hat’s CoreOS was the preferred host for Linux containers, but its recent deprecation means that it’s no longer supported. Anyone using it has had to find an alternative.


The future of programming languages: What to expect in this new Infrastructure as Code world

While we still use COBOL and other older programming languages, we also keep inventing new languages, each with its own advantages and disadvantages. For example, we have Rust and C++ for low-level, performance-sensitive systems programming (with Rust adding the benefit of safety); Python and R for machine learning, data manipulation, and more; and so on. Different tools for different needs. But as we move into this Everything-as-Code world, why can't we just keep using the same programming languages? After all, wouldn't it be better to use the Ruby you know (with all its built-in tooling) rather than starting from scratch? The answer is "no," as Graham Neray, cofounder and CEO of oso, told me. Why? Because there is often a "mismatch between the language and the purpose." These general-purpose, imperative languages "were designed for people to build apps and scripts from the ground up, as opposed to defining configurations, policies, etc." Further, mixing declarative tools with an imperative language doesn't make things any easier to debug. Consider Pulumi, which bills itself as an "open source infrastructure-as-code SDK [that] enables you to create, deploy, and manage infrastructure on any cloud, using your favorite languages." Sounds awesome, right?
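For context on what "infrastructure-as-code in your favorite language" looks like in practice, here is roughly the shape of a minimal Pulumi program in Python, modeled on Pulumi's public getting-started examples. Treat it as a sketch rather than a definitive reference: it requires the pulumi and pulumi_aws packages plus cloud credentials before it can actually deploy anything.

```python
# Roughly what a minimal Pulumi program looks like in Python: infrastructure is
# declared with ordinary Python objects rather than a separate DSL.
import pulumi
from pulumi_aws import s3

# Declaring a resource is just instantiating an object.
bucket = s3.Bucket("app-assets")

# Outputs are exported so other programs (or humans) can consume them.
pulumi.export("bucket_name", bucket.id)
```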


Did Dremio Just Make Data Warehouses Obsolete?

The first new thing was caching data in the Apache Arrow format. The company employs the creators of Arrow, the in-memory data format, and it uses Arrow in its computation engine. But Dremio was not using Arrow to accelerate queries. Instead, it used the Apache Parquet file format to build caches. However, because it’s an on-disk format, Parquet is much slower than Arrow. ... The second new thing that Dremio had to build was scale-out query planning. This advance enabled the massive concurrency that the biggest enterprise BI shops demand of their data warehouses. “Traditionally in the world of big data, people had lots of nodes to support big data sets, but they didn’t have lots of nodes to support concurrency,” Shiran says. “We now scale out our query planning and execution separately.” By enabling an arbitrary number of query planning coordinators in the Dremio cluster to go along with an arbitrary number of query executors, the software can now support deployments involving thousands of concurrent users. The third new element Dremio is bringing to the data lake is runtime filtering. By being smart about what database tables queries actually end up accessing during the course of execution, Dremio can eliminate the need to perform massive table scans on data that has no bearing on the results of the query.
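The on-disk versus in-memory distinction can be seen with a few lines of the pyarrow library: a Parquet cache has to be written to and decoded from disk on every read, while an Arrow table is already in the columnar, in-memory layout a query engine can compute on directly. This is a generic illustration of the two formats, not Dremio's actual caching code.

```python
# Parquet is an on-disk format that has to be decoded on read; Arrow is the
# in-memory columnar layout an engine can compute on directly.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"order_id": [1, 2, 3], "amount": [9.5, 20.0, 3.25]})

# Parquet-style cache: write to disk, decode it again on every read.
pq.write_table(table, "cache.parquet")
from_disk = pq.read_table("cache.parquet")

# Arrow-style cache: keep the columnar buffers in memory, ready for compute.
in_memory = table  # already in Arrow format, no decode step

print(from_disk.num_rows, in_memory.num_rows)
```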


What’s stopping job seekers from considering a career in cybersecurity?

The good news is that 71% of participants said that they view cybersecurity professionals as smart, technically skilled individuals, 51% view them as “good guys fighting cybercrime,” and 35% said cybersecurity professionals “keep us safe, like police and firefighters.” The bad news is that even though most view cybersecurity as a good career path, they don’t think it’s the right path for them. In fact, only 8% of respondents have considered working in the field at some point. “One of the most unexpected findings in the study is that respondents from the youngest generation of workers – Generation Z (Zoomers), which consist of those up to age 24 – have a less positive perception of cybersecurity professionals than any other generation surveyed. This issue in particular merits close attention by the cybersecurity industry at a time when employers are struggling to overcome the talent gap,” (ISC)² noted. The analysts posited that Generation Z’s perceptions of the cybersecurity field are shaped negatively by social media exposure, as social media platforms “tend to focus on the negative – arguments and venting.”


A Progressive Approach To Data Governance

For businesses, progressive data governance encourages fluid implementation using scalable tools and programs. The first step is to identify both a dataset and the relevant function. Using the same example as before, this could be the data in a reporting system the accounts department uses. That data could then be used during data literacy training hosted by a data governance software tool. Sticking with data literacy, after establishing one use case, an organization may decide to progress by expanding existing programs to other departments and then moving on to another function of data governance, such as identifying the roles and responsibilities of various data users or developing an internal compliance program. Businesses can scale the scope of the data they include in a governance program gradually, which gives them the chance to learn important lessons along the way. As an organization grows in confidence, it may widen its data scope and source it from other departments and locations. Progressive data governance can be described as a three-step process that incorporates the three C’s: catalog, collaborate and comply. Cataloging data assets makes data discoverable. 


The 4 Stages of Data Sophistication

As you start to rely on more data sources, and more frequently need to blend your data, you’ll want to build out a Data Lake—a spot for all of your data to exist together in a unified, performant source. Especially when you need to work with data from applications like Salesforce, Hubspot, Jira, and Zendesk, you’ll want to create a single home for this data so you can access all of it together and with a single SQL syntax, rather than many different APIs. ...  In the Lake stage, as you bring in more people to work with the data, you have to explain to them the oddities of each schema, what data is where, and what special criteria you need to filter by in each of the tables to get the proper results. This becomes a lot of work, and will leave you frequently fighting integrity issues. Eventually, you’ll want to start cleaning your data into a single, clean source of truth. ... When you have clean data and a good BI product on top of it, you should start noticing that many people within your company are able to answer their own questions, and more and more people are getting involved. This is great news: your company is getting increasingly informed, and the business and productivity results should be showing. 
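As a small illustration of the "single home, single SQL syntax" idea, the sketch below uses Python's built-in sqlite3 module as a stand-in for the lake. The table names and columns are invented; in practice the data would be extracted from systems like Salesforce or Zendesk by an ingestion pipeline.

```python
# Once application data lands in one store, a single SQL query can blend data
# that previously lived behind two different APIs.
import sqlite3

lake = sqlite3.connect(":memory:")
lake.execute("CREATE TABLE salesforce_accounts (account_id INTEGER, name TEXT)")
lake.execute("CREATE TABLE zendesk_tickets (account_id INTEGER, status TEXT)")
lake.execute("INSERT INTO salesforce_accounts VALUES (1, 'Acme'), (2, 'Globex')")
lake.execute("INSERT INTO zendesk_tickets VALUES (1, 'open'), (1, 'closed')")

rows = lake.execute("""
    SELECT a.name, COUNT(t.account_id) AS open_tickets
    FROM salesforce_accounts a
    LEFT JOIN zendesk_tickets t
        ON t.account_id = a.account_id AND t.status = 'open'
    GROUP BY a.name
    ORDER BY a.name
""").fetchall()
print(rows)   # [('Acme', 1), ('Globex', 0)]
```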


Tales & Tips from the Trenches: In Defense of the Data Catalog

Most tools’ catalog interfaces offer many helpful features that together provide the context behind the data. The interface has many visual features that are certainly not vintage 1980s. For example, many data catalog products have data quality metrics built in, which show dashboards of an asset’s quality on many of the “data quality dimensions.” These dashboards can be visible to the user and can help them determine if the data is suitable for their purposes. ... Data lineage is an extremely important feature of data catalogs; the products vary in how they perform it and how deep the lineage goes. One of my government sponsors felt data lineage was critical to their understanding, especially the visual depiction of the lineage. The data catalog’s data lineage diagrams tell the whole “back story” of the data: where it comes from, where it’s going, how “good” it is (based on whatever quality metrics are relevant), and some products even show the level of protection in the lineage diagram. The interface is important because it displays a visual diagram of the data flow along with descriptive metadata. See Figure 2 from Informatica, which shows column-to-column mappings as data flows from one system to another, from source to warehouse or data lake. Notice that the actual transformations can also be shown for a given column.


A Seven-Step Guide to API-First Integration

This approach drastically reduces project delays and cost overruns due to miscommunication between frontend and backend teams leading to changes in APIs and backend systems. After designing the APIs, it can take some time to get the live backend systems up and running for the frontend teams to make API calls and test the system. To overcome this issue, frontend teams can set up dummy services, called mock backends, that mimic the designed APIs and return dummy data. You can read more about it in this API mocking guide. There can be instances where the requirements are vague or the development teams aren’t sure about the right approach to design the APIs upfront. In that case, we can design the API for a reduced scope and then implement it. We can do this for several iterations, using multiple sprints until the required scope is implemented. This way, we can identify a design flaw at an earlier stage and minimize the impact on project timelines. ... In software engineering, the façade design pattern is used to provide a more user-friendly interface for its users, hiding the complexity of a system. The idea behind the API façade is also the same; it provides a simplified API of its complex backend systems to the application programmers. 
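A mock backend of the kind described here can be only a few lines. The sketch below uses the Flask package, and the route and payload are invented examples rather than a real API contract; the point is that frontend teams can point their code at this stub while the real services are still being built.

```python
# A tiny mock backend: it implements the agreed API shape and returns canned
# data so frontend work can start before the live backend systems exist.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/api/orders/<int:order_id>")
def get_order(order_id):
    # Dummy data matching the designed response schema.
    return jsonify({
        "id": order_id,
        "status": "SHIPPED",
        "items": [{"sku": "ABC-123", "qty": 2}],
    })


if __name__ == "__main__":
    # Frontend points at http://localhost:8080 until the real backend is ready.
    app.run(port=8080)
```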


Fintechs: transforming customer expectations and disrupting finance

With favourable tech regulation, massive mobile adoption, and shifting expectations across the demographics, digital challengers are well positioned to advance and evolve the personalised services they offer. Fintechs have the advantage of starting from scratch, without having to build on legacy IT infrastructure or bureaucratic decision-making processes. They are lean and innovative, led by entrepreneurs on a mission to change the world. Using the latest technologies such as Artificial Intelligence (AI), Blockchain, Biometric Security and Cloud, their processes, compliance requirements, policies and technology differ from those of conventional banks, providing lower operating and resource costs. With these foundations, they are well-positioned to pursue a highly customer-centric approach and rapid product innovation. By contrast, for traditional banks it can be an arduous task to innovate and reinvent. They are highly bureaucratic and slow-moving, with high-cost structures and substantial legacy tech. These characteristics prevent them from flexibly adapting to fast-changing consumer expectations. Service providers unable to live up to the expectations of best-in-class digital experiences will see high switching rates. As a result, providers are actively investing in initiatives that boost customer experience in a bid to increase long-term customer retention.



Quote for the day:

"Rarely have I seen a situation where doing less than the other guy is a good strategy." -- Jimmy Spithill

Daily Tech Digest - November 09, 2020

How to get the most out of intelligent automation

Getting a digital workforce up-and-running may prove difficult without buy-in from senior IT personnel, so ensure that their early support is gained. IT must also be involved from the start to support on many critical fronts, such as compliance with IT security, building and managing the underlying infrastructure and accessing target applications. Although digital workers are trained, governed and run by the business, not getting IT engagement is one of the fastest ways to curtail an automation program. In fact, working closely with IT must be an ongoing activity to ensure that the digital workforce operates optimally with minimal operational issues. For digital worker campaigns to be sustainable, they must also receive buy-in from senior executives who will sponsor and champion it. If they see it as a strategic business project, they’ll help provide the requisite financial and human resources and help remove any obstacles along the way. It’s important to plan where digital workers sit within the business so they most effectively scale as automation demand increases. While an organizational structure provides the foundation for standard operating procedures, selecting the right design for a digital workforce program is essential and should be based on key information, not gut feel.


Diversification and vision drive success in a crisis

What we learned the hard way is that some of our businesses are pretty dependent on supplies from a single country. For example, we get nearly all the vitamins for certain animal feed products from India. We knew this, but when you’re confronted immediately with a complete lockdown in India, you’re faced with the reality. A border closing is where the biggest risk is. We quickly started to find other sources to make sure we could continue supplying customers, and now we’re not dependent anymore on one location for the majority of our supplies. There were some scary moments that had more to do with closure of borders, but in the end, I don’t think the supply lines were that much disrupted. Safety stocks are now being held everywhere in the organization, which I understand on the one hand, because if I were in management, I would probably do the same. On the other hand, [doing so] eats into your liquidity because you’re building up a huge amount of working capital. We’re now focused on bringing that down again. An unrelated topic is Brexit, but that also doesn’t help, because in that area we need to prepare too [for the change in trade regulations]. I think there is a growing realization that [relying on] 100 percent just-in-time deliveries around the world might not be the best model.


EU moves closer to encryption ban after Austria, France attacks

The document states: “Law enforcement is increasingly dependent on access to electronic evidence to effectively fight terrorism, organised crime, child sexual abuse (particularly its online aspects), as well as a variety of cyber-enabled crimes. For competent authorities, access to electronic evidence is not only essential to conduct successful investigations and thereby bring criminals to justice, but also to protect victims and help ensure security. “The principle of security through encryption and security despite encryption must be upheld in its entirety. The European Union continues to support strong encryption. Encryption is an anchor of confidence in digitisation and in protection of fundamental rights and should be promoted and developed. “Protecting the privacy and security of communications through encryption and at the same time upholding the possibility for competent authorities in the area of security and criminal justice to lawfully access relevant data for legitimate, clearly defined purposes in fighting serious and/or organised crimes and terrorism, including in the digital world, are extremely important. Any actions taken have to balance these interests carefully.”


Why 90 percent of all machine learning models never make it into production

Companies aren’t bad at collecting data. However, many companies are highly siloed, which means that each department has its own ways of collecting data, preferred formats, places where they store it, and security and privacy preferences. Data scientists, on the other hand, often need data from several departments. Siloing makes it harder to clean and process that data. Moreover, many data scientists complain that they can’t even obtain the data they need. But how should you even start training a model if you don’t have the necessary data? Siloed company structures — and inaccessible data — might have been manageable in the past. But in an era where technological transformation is happening at breakneck speed, companies will need to step up and set up uniform data structures throughout. ... In addition, engineering isn’t always deemed essential for data scientists. This is a problem because engineers might not always understand all the details of what a data scientist envisions, or might implement things differently due to miscommunication. Therefore, data scientists who can deploy their models have a competitive edge over those who can’t, as StackOverflow points out.


What Quantum Computing Could Mean for Software Development

As speculative as quantum software development sounds, it is not an entirely alien concept. There is a broad class of quantum algorithms, said Yudong Cao, founder and CTO for startup Zapata Computing, that share similar features as machine learning models. “If you look at MLOps or AIOps, this is very much the sort of software engineering challenge [in quantum software] that people also face with AI.” He leads an effort at Zapata to provide software that might help industrial players explore possibilities of quantum computing. Cao said when he started in the field, quantum computing was still largely an academic discipline with theoretical works that might predict what could be done with a quantum computer, as well as experimental works that demonstrate what could be done. “Today we’re seeing this gap become narrower and narrower,” he said. “On the theory side, we’re improving our algorithms to reduce the amount of resources. On the other side, new hardware is coming online.” There is a frontier emerging for quantum computing thanks to software solutions and hardware maturing, but Cao said the confusing ecosystem needs to be sorted out. “What is needed is a set of tools that allow people to tap into this diverse landscape effectively,” he said. 


Companies gearing up for 5G era of industry digitisation

For Smart Cities, there will be considerably more ways to manage community life in near real-time, such as enhanced security and mobility, measuring pollution, optimising energy consumption, improving waste management, etc. The list is virtually endless. Although existing 4G networks have already been deployed in smart cities around the world, they are limited by the number of connections they can support, the data they can transmit, and most importantly the speed they can offer, all of which create hurdles in the deployment of smart city use cases. 5G is expected to overcome these hurdles, allowing a large number of connections and providing super-high bandwidth and ultra-low-latency communications to build a connected city – a smarter city. 5G will provide the infrastructure to roll out these innovations that appear promising today. Therefore, 5G opens up a whole world of new possibilities. Until the fourth generation network (4G), consumers were the first to benefit from each new generation of wireless technology. However, the main interest of 5G lies in business-to-business markets. This next generation of mobile telecommunications will become the backbone of industrial operations in the broadest sense.


When It Comes to Culture, Does Your Company Walk the Talk?

To address whether stated values shape employee behavior, we first measured what companies say they value. The simplest way to quantify corporate culture would be to treat each value as binary — a company either listed it as a core value or did not. When Charles Schwab lists innovation as one of four core values, it is presumably more focused on it than Quicken Loans, which includes innovation among a laundry list of 19 elements of its culture. To quantify each company’s relative focus on a value, we weighted it by the inverse of the total number of values listed. So innovation was weighted at 25% for Charles Schwab and 5% for Quicken Loans. (A company that didn’t list a specific value received a weighting of zero for that value.) To control for differences across sectors, we assigned each company to one of 33 industries. We then ranked each company in its industry based on the weighting for each value we measured. To assess how well companies live up to their stated values, we used data from the 2019 Culture 500, which ranks companies on nine of the most commonly cited values. Every Culture 500 company received a sentiment score that measured how positively employees talked about a specific value in the free text of their Glassdoor reviews.
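The weighting scheme the authors describe reduces to a couple of lines of Python: each value a company lists receives a weight equal to the inverse of the number of values it lists, and anything not listed is weighted zero. Apart from "innovation", the value names below are placeholders, not the companies' actual stated values.

```python
# Each listed value gets 1 / (number of values listed); unlisted values get 0.
def value_weights(listed_values):
    share = 1 / len(listed_values)
    return {value: share for value in listed_values}


four_value_company = value_weights(["innovation", "value_b", "value_c", "value_d"])
nineteen_value_company = value_weights(["innovation"] + [f"value_{i}" for i in range(18)])

print(f"{four_value_company['innovation']:.0%}")       # 25%
print(f"{nineteen_value_company['innovation']:.0%}")   # 5%
```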


10 ways the pandemic affected cloud investment

One initiative that emerged as one of the most important for the enterprise was remote resiliency, which was put to the ultimate test in March, when the organizations that had the ability to go remote did. The pressure was on for IT departments, who had to rally to make sure telecommuters had access to the company online, that their personal devices were safe, and to keep vigilant watch as hackers took advantage of the mobilization the COVID-19 crisis imposed on the industry. As coronavirus cases and the resultant deaths rise, the burden of investing in the cloud grows exponentially. A new report from SPR, "Cloud Investment in the Age of Accelerated Evolution," offers research culled from an investigation into the state of enterprise cloud adoption, in what it calls "this uniquely transformative moment." The report cites 10 key findings, including a reveal of the biggest barrier to effective cloud security: the organization's budget. But in the next 12 months, 41% of IT decision-makers said they plan to increase security budgets. Engagement with the cloud had been enthusiastic, but COVID-19 sent the innovations into fast forward, which meant that the enterprise was able to assess use more quickly. Since it was a digital transformation that evolved from urgent need rather than careful observation and experience, it's now time to make the cloud more efficient.


Stargate: A new way to think about databases

DataStax is getting there by open sourcing the Cassandra coordinator code. DataStax started with Cassandra for obvious reasons: The company knows the database well and Cassandra is a great option for handling distributed data requests. But it’s that coordinator code that is the heart of Stargate, as Gosnell explained. The hardest aspect of the logic between a customer’s API and their back end is the distributed request coordination, i.e., ensuring proper load balancing, directing database requests to the right place, etc. This is what Cassandra’s coordinator code does well. The company wants more developers to “join our community and help us prioritize which features we need next” in Stargate, Gosnell stressed. It’s a great story, one that helps DataStax, of course, but also has the potential to be useful for other vendors and with other databases. And that is where Stargate could go from an interesting, single-vendor project into something much more noteworthy. Consider Kubernetes, for example. It was cool technology when its Borg ancestor ran exclusively within Google, and it remained cool as an open source project to which Google employees, almost exclusively, contributed.


Where Does Data Governance Fit into a Data Strategy

The organization’s data is a valuable tool that enables improvements in product development, customer interaction and satisfaction, and quality, and it impacts the organization’s bottom line through better decision-making and improvements in quality, efficiency and effectiveness. Everybody, at all levels, must recognize that your organization’s data governance program will enable you to strategically manage data as an asset to achieve accurate, trusted, and secure data that delivers business intelligence focused on leveraging and building a competitive advantage. The impact on the data of the organization will be significant. Employees and partners will benefit from everything stated as the purpose of the data governance program (in the previous answer). The impact on individuals’ roles will depend on people’s present relationship to the data; however, the intent at your organization is to take a less pervasive approach that aims to minimize disruption to normal business activities. Individuals that define data will become educated on the aspects of data definition that drive improvements to the organization’s confidence in the data they use. Individuals that produce data will become educated in quality data production, including timeliness, accuracy, completeness and relevance.



Quote for the day:

"A real friend is one who walks in when the rest of the world walks out." -- Walter Winchell

Daily Tech Digest - November 08, 2020

How Emerging Demands Of AI-Powered Solutions Help Gain Momentum Of Businesses

AI helps take the BI game leaps and bounds ahead with machine learning and deep learning. It empowers BI with the ability to analyze data coming from multiple sources, learn from this data in real-time, and provide accurate, granular predictive insights for faster business growth. AI always stays one step ahead of humans in terms of analyzing large data sets at scale with speed and accuracy. The influence of AI is not limited to analytics; it extends to data engineering as well. Data coming from multiple structured, unstructured, and semi-structured sources needs to be transformed from silos into unified data. AI can accelerate and automate this process, creating a single view, saving data analysts' time, and providing much-needed independence for business users. AI-powered NLP bots take BI to the next level altogether by enabling users to extract insights via voice or chat using any language. For example, these BI bots can easily answer questions like 'What is the sales forecast for the next two quarters?' With this, business users can skip any complex query and leave it up to the bots to process the analysis.


How to deal with the escalating phishing threat

“Working from home, where there are more distractions, makes it even less likely that people really pay attention to these trainings. That’s why it’s not uncommon to see the same people who tune out training falling for scams again and again,” he noted. That’s why defenders must preempt attacks, he says, and reinforce a lesson during a live attack. When something gets through and someone clicks on a malicious URL, defenders must be able to simultaneously block the attack and show the victim what the phisher was attempting to do. Harr, who has over 20 years of experience as a senior executive and GM at industry-leading security and storage companies and as a serial entrepreneur and CEO at multiple successful start-ups, is now leading SlashNext, a cybersecurity startup that uses AI to predict and protect enterprise users from phishing threats. He says that most CISOs assume phishing is a corporate email problem and their current line of defense is adequate, but they are wrong. “We are detecting 21,000 new phishing attacks a day, many of which have moved beyond corporate email and simple credential stealing. These attacks can easily evade email phishing defenses that rely on static, reputation-based detection.


The cryptocurrency sector is overflowing with dead projects

It’s a good question whether the world really needs a blockchain-based information and trading platform for the pet market. I wouldn’t say there are many problems with over-centralization there. Pet shops are usually chosen by customers after analyzing brand reputation and online presence. Some problems that customers in this market may face include unreliable information about the acquired animal’s health or previous owners. However, these difficulties constitute not a technical but a legal problem, one that is unlikely to be solved using blockchain technology. Moreover, since animal welfare laws vary between different countries, creating a unified international platform in this field is a legally challenging task, hardly suitable for a small technological startup. The Petchain project team consisted mainly of no-names who had no proven experience in any serious projects. It was not even possible to say for sure whether these were real people — some of the project advisors turned out to be represented by fake photos. Despite some marketing efforts, no serious funding was attracted to the project. At the moment, the official website of the project is inactive and its social media accounts haven’t been updated for more than a year.


The Road to MicroProfile 4.0

The primary driver behind creating the MicroProfile Working Group is to close intellectual property gaps identified by the Eclipse Foundation for specification projects. So, there are more legal protections in place now that MicroProfile is a Working Group. A Working Group also places more processes on MicroProfile. Historically, MicroProfile moved quickly with minimal process and late-binding decisions. It was quite an agile project that delivered specifications at quite a quick pace. However, I personally feel like we were reaching a point where adding *some* process can benefit the project. For instance, we now have to put more thought and formality up-front into planning a specification, which requires a Steering Committee vote. Better planning gives implementors, tool vendors, and the community more up-front visibility into what is coming and prepare. However, we codified "limited processes" in the MicroProfile Charter to keep processes to a minimum. ... A big challenge was switching from being a fast-moving agile project to fitting into the process structure required by a Working Group. We wanted to maintain as much of our existing culture as possible because the community was consistently delivering three annual releases.


Blockchain adoption 2021: Going mainstream through enterprise use

According to Bennet, many of the blockchain-based systems that are live today share a common factor: less time involved to resolve discrepancies. In some cases, this could even be instant. Bennet noted this common factor applies to supply chain use cases as well as in financial services: “It’s not just about needing fewer people to accomplish certain tasks; it’s also about shortening elapsed time and freeing up liquidity. A key point is that it’s possible to make it happen today, in the context of existing processes and operating models.” While this may be, Bennet shared that the more long-term strategic projects in financial services tend to revolve around potential changes in market structure and operating models. Many of these cases also require regulatory adjustments. “This takes time, resource and effort. That’s the main reason why COVID-related volatility and uncertainty has led many banks to pull back from some of those more long-term DLT-related projects for the time being,” Bennet said. The report also states that almost all the initiatives set to go from pilot into production next year will run on enterprise blockchain platforms that utilize the cloud. These most likely will include solutions from Alibaba, Huawei, IBM, Microsoft, OneConnect and Oracle.


How open source makes me a better manager

As an open source enthusiast, it was easy for me to transition my management style to the open management philosophy, which fosters transparency, inclusivity, adaptability, and collaboration. As an open manager, one of my primary goals is to engage and empower associates to be their best. It is easy to adopt this philosophy when you understand the open source values. By being transparent, I help create the context for the team and the "why." This is a building block in creating trust. Being consciously inclusive is another value that I regard highly. Making sure everyone in the team is included and everyone's voice is heard is extremely important for individual and organizational growth. In an environment that is constantly evolving and where innovation is key, being nimble and adaptable is of utmost importance. Encouraging associates' growth mindset and continuous learning helps foster these traits. For effective collaboration, I believe we need an environment where there is trust, open communication, and respect. By paying attention to these values, an open manager can create an environment that is inclusive, treats others with respect, and encourages everyone to support each other.


Marriott Hit With $24 Million GDPR Privacy Fine Over Breach

One notable aspect about the fine imposed on Marriott is that it is just one-fifth of the fine that the ICO originally recommended in July 2019, which Marriott had contested. But the reduction is not nearly as big as with the final fine that the ICO recently imposed on British Airways, in connection with a 2018 data breach that exposed the personal information of about 430,000 customers, with 244,000 possibly having their names, addresses, payment card numbers and CVVs compromised. In its initial July 2019 penalty notice, the ICO had proposed fining BA a record £184 million ($238 million). But last month, the regulator issued a final fine of just £20 million ($26 million). Legal experts say the final fines being lower than the proposed penalties is not surprising. Indeed, the ICO earlier this year noted that because of the ongoing coronavirus outbreak, it planned to adjust its regulatory approach, not least because of the staffing and financial impact that COVID-19 was having on organizations. Under GDPR, after proposing a fine, regulators have 12 months to issue a final fine, unless it proposes delaying the imposition of the fine, and the organization that is being investigated agrees.


What Is The Value Proposition of Enterprise Architecture Today?

The first one of these is Strategy Advancement. This is basically concerned with how the business can achieve its target outcomes and also with identifying the means to do so. So what are we trying to achieve – do we know, do we have doubts about that? If we’re sure our goals are solid, then how do we make them happen, how do we ensure that every investment, or strategic decision, or new business process we set up is in line with and actively supports achieving those goals. EA connects all these concepts across the different enterprise domains beautifully, and when done in a leading platform like HoriZZon, the quality of the business intelligence insights that can be produced and delivered to the relevant audiences, in order to make sure everyone’s eyes remain on the prize, is invaluable. So strategy advancement is key in ensuring coordinated change across the entire business. The second of these areas is Risk Identification & Mitigation. Security and the risks to personal data have never been more relevant than now. This area of enterprise architecture’s value proposition deals with identifying the risks faced by the organization in a way that allows architects to engage in a meaningful conversation with stakeholders on the business side about how we can address these risks.


Three Intelligent Automation Capabilities to Look for When Evaluating RPA Tools

Making decisions based on rules only works when outcomes are predictable. What happens when outcomes are less certain and conditions more varied — conditions under which people have to make decisions all the time? For instance, how would a bot respond to a query like Is the supplier reliable? Choosing an answer like Extremely Reliable, Very Reliable, Sometimes Reliable, or Not Reliable requires an element of human reasoning. Bots can achieve this by applying an AI technique called fuzzy logic. Fuzzy logic uses mathematical models defined by the RPA developer to represent variations and uncertainty. For example, on a scale from 1 to 10, an Extremely Reliable supplier may be rated between 7 and 10, whereas a Very Reliable supplier may be rated between 6 and 8. The bot uses these mathematical models to convert precise input data into fuzzy input values. The bot then applies business rules defined by the RPA developer to the fuzzy input values. The mathematical model is then applied to the fuzzy output values to generate the result. ... As the amount of digital work increases, RPA solutions need scalability to provide greater performance capacity. Most RPA vendors solve this problem by enabling customers to add more bots to scale capacity horizontally. 
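As an illustration of the fuzzy-logic step described here, the sketch below turns a precise 1-10 reliability rating into degrees of membership in overlapping fuzzy sets. The triangular membership functions and their cut-off points are invented examples, not any RPA vendor's actual model.

```python
# Overlapping triangular membership functions convert a precise reliability
# rating into fuzzy degrees of membership, which business rules can then act on.
def triangular(x, low, peak, high):
    """Degree of membership (0..1) in a triangular fuzzy set."""
    if x <= low or x >= high:
        return 0.0
    if x == peak:
        return 1.0
    if x < peak:
        return (x - low) / (peak - low)
    return (high - x) / (high - peak)


MEMBERSHIP = {
    "Not Reliable":       lambda x: triangular(x, 0, 1, 4),
    "Sometimes Reliable": lambda x: triangular(x, 2, 4, 7),
    "Very Reliable":      lambda x: triangular(x, 5, 7, 9),    # overlaps its neighbours
    "Extremely Reliable": lambda x: triangular(x, 7, 10, 11),
}

rating = 7.5   # precise input, e.g. from a supplier scorecard
fuzzy_input = {label: fn(rating) for label, fn in MEMBERSHIP.items()}
print(fuzzy_input)   # a 7.5 is partly "Very Reliable" and partly "Extremely Reliable"
```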


How Data Gravity Is Forcing a Shift to a Data-Centric Enterprise Architecture

The new demands brought on by AI and ML create new opportunities for data-centric architecture that supports businesses and their need to operate ubiquitously so they can meet customer expectations and make business decisions on-demand. It’s informed by real-time intelligence to power innovation and scale digital business. ... With a modernized infrastructure strategy, enterprises can support the influx of data from several users, locations, clouds, and networks and create centers of data exchange. Traffic can then be aggregated and maintained via public or private clouds, at the core or the edge, and from every point of business presence, helping lessen data gravity barriers and their effects. By implementing a secure, hybrid IT and data-centric architecture globally at key points of business presence, businesses can harness data to create centers of data exchange for better digital decision-making. Data gravity impacts businesses of all sizes, and every industry has unique requirements around addressing data gravity. In order for the industry to tackle the next era of compute, companies, including data center, cloud, and HPC solution providers, are coming together to help mitigate the challenges associated with data gravity by creating an ecosystem of partners so that enterprises can address their global coverage, capacity, and ecosystem connectivity needs.



Quote for the day:

"The greatest thing is, at any moment, to be willing to give up who we are in order to become all that we can be." -- Max de Pree

Daily Tech Digest - November 07, 2020

Why Culture Is the Greatest Barrier to Data Success

Achieving data success is a journey, not a sprint. Companies desire to accelerate their efforts to become data-driven, but consistency, patience, and steadfastness pay off in the long run. Companies that set a clear course, with reasonable expectations and phased results over a period of time, get to the destination faster. Develop a plan. Create a data strategy for your company if you do not already have one. If you do have a data strategy, make sure that it is updated annually to reflect changes in the business and the ongoing and rapid evolution of emerging data management capabilities. Define your future state, and build an execution road map that will take you from your current state to the target outcome. It is hard to reach any destination without a good road map. Companies need to maintain a long-term view and stick to it while making periodic adjustments. Patience, persistence, and commitment are the ingredients for ensuring a successful long-term outcome. Organizations must evolve and change the ways in which they structure current business processes if they expect to become more data-driven. In short, companies must be prepared to think differently.


Silver Peak SD-WAN Collects Aruba's ClearPass Treatment

According to Lunetta, ClearPass was a natural place to start the integration efforts. “Security has always been central to Aruba’s network solutions and is top of mind for every customer these days, especially with the increase of remote working and proliferation of IoT devices on the network,” he said. Aruba’s ClearPass offering was announced in April 2019 to help enterprises cope with the growing number of IoT and connected devices on the network. ClearPass device insights is a tool that employs machine learning to automate the discovery and fingerprinting of connected devices. When paired with Aruba’s ClearPass Policy Manager, customers can dynamically segment security capabilities, making it possible to authenticate and enforce policies based on device type and the needs of the user. Silver Peak customers will be able to identify and block unauthorized users from accessing applications or other services at the WAN edge long before they get to the cloud or private data center. “I think the biggest benefit will be adding more intelligence to the segmentation capabilities from Silver Peak,” said John Grady, network security analyst at ESG, in an email to SDxCentral. “By adding agentless device visibility and context, as well as the automation and policy control from ClearPass, SilverPeak becomes that much more attractive, especially relative to IoT.”


Ransomware Alert: Pay2Key

Over the past week, an exceptional number of Israeli companies reported ransomware attacks. While some of the attacks were carried out by known ransomware strains like REvil and Ryuk, several large corporations experienced a full-blown attack with a previously unknown ransomware variant named Pay2Key. As days go by, more of the reported ransomware attacks turn out to be related to the new Pay2Key ransomware. The attacker followed the same procedure to gain a foothold, propagate and remotely control the infection within the compromised companies. The investigation so far indicates the attacker may have gained access to the organizations’ networks some time before the attack, but demonstrated the ability to spread the ransomware to the entire network within an hour. After completing the infection phase, the victims received a customized ransom note, with a relatively low demand of 7-9 bitcoins (~$110K-$140K). The full scope of these attacks is still unraveling and is under investigation; but we at Check Point Research would like to offer our initial analysis of this new ransomware variant, as well as to provide relevant IOCs to help mitigate possible ongoing attacks. ... Analyzing the Pay2Key ransomware operation, we were unable to correlate it to any other existing ransomware strain, and it appears to have been developed from scratch.


Blazor: Full stack C# and Microsoft's pitch for ASP.NET Web Form diehards

Blazor is not much like Web Forms, but the two have some things in common. One is that developers can write C# everywhere, both on the server and for the browser client. Microsoft calls this “full stack C#”. “Blazor shares many commonalities with ASP.NET Web Forms, like having a reusable component model and a simple way to handle user events,” wrote the authors. The Blazor framework comes in several guises. The initial concept, and one of the options, is Blazor WebAssembly (Wasm). The .NET runtime is compiled to Wasm, the application is compiled to a .NET DLL, and it runs in the browser, supplemented by JavaScript interop. ... Blazor is designed for single-page applications and is reminiscent of Silverlight – Microsoft’s browser plugin which ran .NET code in the browser – but with an HTML/CSS user interface. There are two other Blazor application models. Blazor Server runs on the server and supports a thin browser client communicating over WebSockets (ASP.NET SignalR). The programming model is the same, but it is a thin-client approach, which means faster loading and no WebAssembly required; it can even be persuaded to run in IE11.


What is data architecture? A framework for managing data

According to the Data Management Body of Knowledge (DMBOK 2), data architecture defines the blueprint for managing data assets by aligning with organizational strategy to establish strategic data requirements and designs to meet those requirements. On the other hand, DMBOK 2 defines data modeling as, "the process of discovering, analyzing, representing, and communicating data requirements in a precise form called the data model." While both data architecture and data modeling seek to bridge the gap between business goals and technology, data architecture is about the macro view that seeks to understand and support the relationships between an organization's functions, technology, and data types. Data modeling takes a more focused view of specific systems or business cases. There are several enterprise architecture frameworks that commonly serve as the foundation for building an organization's data architecture framework. DAMA International's Data Management Body of Knowledge is a framework specifically for data management. It provides standard definitions for data management functions, deliverables, roles, and other terminology, and presents guiding principles for data management.
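
To make the distinction concrete, a data model pins down the entities, attributes, and relationships of one specific business case in a precise, checkable form, while the data architecture describes how such models and their systems fit together across the organization. The sketch below is a minimal, hypothetical illustration in Python; the entity names and fields are invented for the example, not drawn from DMBOK 2.

from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical data model for one narrow business case: customer orders.
# A data architecture, by contrast, would describe how entities like these
# flow between the CRM, the warehouse system, and the analytics platform.

@dataclass
class Customer:
    customer_id: int
    name: str
    email: str

@dataclass
class Order:
    order_id: int
    customer_id: int              # references Customer.customer_id
    order_date: date
    total_amount: float
    shipped_on: Optional[date] = None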


Using machine learning to track the pandemic’s impact on mental health

Using several types of natural language processing algorithms, the researchers measured the frequency of words associated with topics such as anxiety, death, isolation, and substance abuse, and grouped posts together based on similarities in the language used. These approaches allowed the researchers to identify similarities between each group’s posts after the onset of the pandemic, as well as distinctive differences between groups. The researchers found that while people in most of the support groups began posting about Covid-19 in March, the group devoted to health anxiety started much earlier, in January. However, as the pandemic progressed, the other mental health groups began to closely resemble the health anxiety group, in terms of the language that was most often used. At the same time, the group devoted to personal finance showed the most negative semantic change from January to April 2020, and significantly increased the use of words related to economic stress and negative sentiment. They also discovered that the mental health groups most negatively affected early in the pandemic were those related to ADHD and eating disorders.
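
The core of that approach can be sketched in a few lines of Python. This is an illustrative reconstruction rather than the study's actual pipeline; the posts, the topic lexicon, and the choice of TF-IDF with k-means are assumptions made for the example.

# Illustrative only: count topic-related words, then group posts by the
# similarity of their language using TF-IDF vectors and k-means clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [                                   # invented example posts
    "worried all the time about getting sick",
    "lost my job and cannot cover rent this month",
    "feeling isolated since the lockdown started",
]

anxiety_terms = {"worried", "anxious", "panic"}          # illustrative lexicon
anxiety_freq = [
    sum(word in anxiety_terms for word in post.lower().split()) / len(post.split())
    for post in posts
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(anxiety_freq, labels)                 # per-post topic frequency and cluster label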


‘Digital Mercenaries’: Why Blockchain Analytics Firms Have Privacy Advocates Worried

Gladstein and other advocates see this sort of blockchain analysis as an extension of governmental surveillance, along the lines of when the National Security Agency (NSA) was secretly gathering extensive metadata on the American public, not to mention the agency’s work abroad. Gladstein argues that when it comes to payment processors like Square and even exchanges, they can make a case they work hard to protect customer privacy. But if you start a blockchain surveillance company (as companies such as Chainalysis, CipherTrace and Elliptic have done), that’s not a defense because the explicit purpose of the company is to participate in the de-anonymization process.  De-anonymization is a process that has different components, one being the use of the blockchain to trace where funds go.  “Natively speaking, Bitcoin is very privacy-protecting because it’s not linked to your identity or your home address or your credit card history,” said Gladstein. “It’s just a freaking random address, right? And the coins are moved from one address to another. To pair these to a person and destroy their privacy requires intentional or unintentional doxxing.”
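
The tracing component Gladstein describes amounts to walking a public transaction graph. A minimal sketch, using invented addresses and amounts rather than real chain data, might look like this:

# Invented ledger entries; real analysis would read these from the blockchain.
from collections import defaultdict, deque

transactions = [
    ("addr_A", "addr_B", 0.5),    # (from, to, BTC)
    ("addr_B", "addr_C", 0.3),
    ("addr_B", "addr_D", 0.2),
]

outgoing = defaultdict(list)
for src, dst, amount in transactions:
    outgoing[src].append((dst, amount))

def trace(start):
    """Breadth-first walk over every address reachable downstream of `start`."""
    seen, queue, hops = {start}, deque([start]), []
    while queue:
        addr = queue.popleft()
        for nxt, amount in outgoing[addr]:
            hops.append((addr, nxt, amount))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return hops

print(trace("addr_A"))   # every downstream hop; pairing addresses to people is the doxxing step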


Kubernetes Security Best Practices to Keep You out of the News

Building secure containers requires scanning them for vulnerabilities — including Linux system packages, as well as application packages for dynamic languages like Python or Ruby. App developers might be accustomed to scanning application dependencies, but now that they are shipping an entire operating system with their app, they have to be supported in securing the OS as well. To support this effort at scale, consider using a tool like Cloud Native Buildpacks, which allows a platform or ops team to make standardized container builds that developers can use to drop their application into — completely replacing the Dockerfile for a project. These centralized builds can be kept up-to-date so that developers can focus on what they’re good at rather than having to be jacks-of-all-DevOps-trades. Container image scanning tools scan the layers of a built image for known vulnerabilities, and are indispensable in keeping your builds and dependencies up-to-date. They can be run during development and in CI pipelines to shift security practices left, giving developers the earliest notice of a vulnerability. The best practice is to strip your container down to the minimum needed to run the application. A great way to ruin an attacker’s day is to have a container with no shell!
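
As one way to wire scanning into a pipeline, the sketch below wraps a scanner invocation in a small CI gate. Trivy is used purely as an example of such a scanner, with its commonly documented options; the image name is hypothetical, and any equivalent tool could be substituted.

import subprocess
import sys

IMAGE = "registry.example.com/myapp:1.2.3"   # hypothetical image name

# Ask the scanner to exit non-zero when high or critical findings exist,
# then use that exit code to block the rest of the pipeline.
result = subprocess.run(
    ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", IMAGE]
)
if result.returncode != 0:
    print("High/critical vulnerabilities found -- failing the build.")
    sys.exit(1)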


Gitpaste-12 Worm Targets Linux Servers, IoT Devices

This script sets up a cron job it downloads from Pastebin. A cron job is a time-based job scheduler in Unix-like computer operating systems. The cron job calls a script and executes it again each minute; researchers believe this script is one mechanism by which updates can be pushed to the botnet. It then downloads a script from GitHub (https://raw[.]githubusercontent[.]com/cnmnmsl-001/-/master/shadu1) and executes it. The script contains comments in the Chinese language and has multiple commands available to attackers to disable different security capabilities. These include stripping the system’s defenses, including firewall rules, SELinux (a security architecture for Linux systems), and AppArmor (a Linux kernel security module that allows the system administrator to restrict programs’ capabilities), as well as common attack-prevention and monitoring software. The malware also has some commands that disable cloud security agents, “which clearly indicates the threat actor intends to target public cloud computing infrastructure provided by Alibaba Cloud and Tencent,” said researchers. Gitpaste-12 also features commands allowing it to run a cryptominer that targets the Monero cryptocurrency.


Data Strategies for Efficient and Secure Edge Computing Services

There is a long list of design questions that comes with executing an IoT network: where does computation happen? Where and how do you store and encrypt data? Do you require encryption for data in motion or just at rest? How do you coordinate workflows across devices? And finally, how much does this cost? While this is an intimidating list, we can build on good practices that have evolved both prior to the advent of IoT and more recently with the increasing use of edge computing. First, let’s take a look at computation and data storage. When possible, computation should happen close to the data. By minimizing transmission time, you reduce the overall latency for receiving results. Remember that distributing computation can increase overall system complexity, creating new vulnerabilities at various endpoints, so it’s important to keep it simple. One approach is to do minimal processing on IoT devices themselves. A data collection device may just need to package a payload of data, add routing and authentication to the payload, then send it to another device for further processing. There are some instances, however, where computing close to the collection site is necessary.
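
That "package, authenticate, forward" pattern is simple enough to sketch. The following is an illustrative example only, with an invented pre-shared key, gateway name, and payload; a real deployment would choose its own transport and key management.

import hashlib
import hmac
import json
import time

SHARED_KEY = b"device-provisioned-secret"    # hypothetical pre-shared key
GATEWAY = "edge-gateway-01"                  # hypothetical next hop

def package_reading(device_id, reading):
    """Wrap a sensor reading with routing info and an HMAC for authentication."""
    payload = {
        "device_id": device_id,
        "timestamp": time.time(),
        "reading": reading,
        "route_to": GATEWAY,                 # routing hint for the next hop
    }
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "signature": signature}

message = package_reading("sensor-7", {"temp_c": 21.4})
# send(message) over MQTT, HTTP, or whatever transport the deployment uses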



Quote for the day:

"Superlative leaders are fully equipped to deliver in destiny; they locate eternally assigned destines." -- Anyaele Sam Chiyson

Daily Tech Digest - November 06, 2020

Applying particle physics methods to quantum computing

In quantum computing, which relies on quantum bits, or qubits, to carry information, the fragile state known as quantum superposition is difficult to maintain and can decay over time, causing a qubit to display a zero instead of a one—this is a common example of a readout error. Superposition means that a quantum bit can represent a zero, a one, or both values at the same time. This enables unique computing capabilities not possible in conventional computing, which relies on bits representing either a one or a zero, but not both at once. Another source of readout error in quantum computers is simply a faulty measurement of a qubit's state due to the architecture of the computer. In the study, researchers simulated a quantum computer to compare the performance of three different error-correction (or error-mitigation or unfolding) techniques. They found that the IBU method is more robust in a very noisy, error-prone environment, and slightly outperformed the other two in the presence of more common noise patterns. Its performance was compared to an error-correction method called Ignis, which is part of a collection of open-source quantum-computing software development tools developed for IBM's quantum computers, and a very basic form of unfolding known as the matrix inversion method.
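
For a single qubit, both unfolding approaches mentioned above can be sketched with a response matrix. The numbers below are made up for illustration and are not taken from the study.

import numpy as np

# R[i, j] = probability of measuring outcome i when the true state is j
R = np.array([[0.95, 0.08],                  # made-up readout error rates
              [0.05, 0.92]])
measured = np.array([0.62, 0.38])            # observed outcome frequencies

# Matrix inversion method: directly undo the response matrix.
corrected = np.linalg.solve(R, measured)

# Iterative Bayesian unfolding (IBU): repeatedly apply Bayes' rule from a flat
# prior, which tends to stay stable where plain inversion can go negative
# under heavier noise.
t = np.array([0.5, 0.5])
for _ in range(10):
    t = t * (R.T @ (measured / (R @ t)))

print(corrected, t)                          # two estimates of the true probabilities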


Common Challenges Facing Angular Enterprises - Stephen Fluin at Ngconf

The top concerns emerging from the conversations that Fluin had in the first trimester of this year are linked to user experience, micro front-ends, server-side rendering, monorepositories and code sharing, managing applications that are only partly Angular-based, and presenting a business case for upgrading Angular versions. A good user experience means fast initial load and seamless transitions. Fluin strongly recommended using the source-map-explorer npm package to monitor and analyze the composition of an Angular bundle: in enterprise conversations, this was actually identified as one of the most valuable things they had learned. Fluin also mentioned that simply by keeping up to date with the latest Angular versions, Angular developers will naturally benefit from smaller bundle sizes or an improved command-line interface implementing configurable optimization strategies (e.g., better bundling, server-side rendering). Fluin posited that seamless transitions between routes in Angular applications were already one of Angular’s strengths. Fluin then explained that the independent deployability characteristic of micro front-ends may come into tension with the recommended use of monorepositories to address other issues such as testing, code sharing, or dependency management.


How Shell is fleshing out a digital-twin strategy

According to Shell, the deployment of the simulation technology will also enable safe asset life extension by replacing the over-conservative estimates made with conventional simulation software, with accurate assessments that reflect actual remaining fatigue life. Elohor Aiboni, asset manager for Bonga, said: “The Bonga Main FPSO heralded a number of innovative ‘firsts’ when it was built back in 2004, so it’s fitting that it is the first asset of its kind to deploy something as advanced as a structural digital twin. We are very excited about the new capabilities that Akselos brings and believe it will create a positive impact on the way we manage structural integrity. It is also a great example of digitisation coming to life.” In a recent blog post, Victor Voulgaropoulos, industry analyst at Verdantix, wrote: “Shell is again in the spotlight, as it seeks to further accelerate its digital transformation initiatives by implementing digital-twin solutions across its global portfolio of assets and capital projects. Shell has signed an enterprise framework agreement with Kongsberg Digital, a Kongsberg subsidiary, for the deployment of Kongsberg’s Kognitwin Energy, a cloud-based software-as-a-service digital-twin solution, within Shell’s upstream, liquified natural gas, and downstream business lines.”


How COVID-19 Changed the VC Investment Landscape for Cybersecurity Companies

Businesses have faced the need to find new and inventive ways to survive the "new normal." For many companies, this means digitizing existing processes and relying heavily on cloud-based services to enable workers to access corporate networks from their homes. But this presents myriad new problems for businesses. While the pandemic provides vast opportunities for digital transformation, it unfortunately creates the perfect storm for data breaches and hackers, too. Social distancing restrictions have forced firms to abandon the protections in the office in favor of enabling employees to work from home, where they might not have the same robust levels of security. Of course, VCs have kept their ears to the ground and are looking to cybersecurity and artificial intelligence (AI) startups as a means to mitigate these new vulnerabilities. Cybersecurity spending is forecast to grow approximately 9% a year from 2021 to 2024, according to Gartner, as businesses invest more heavily in identifying and quickly responding to threats. While large corporations have traditionally been responsible for huge amounts of private data that make cybersecurity a priority, the new virtual backdrop across all industries means that businesses of all shapes and sizes are looking to build the capabilities and defenses needed to keep malicious actors at bay.


NHS warned over Ryuk spreading through Trickbot replacements

“In recent weeks, we assess with high confidence that BazarBackdoor has been Ryuk’s most predominant loader,” said the firm. “With lower confidence, we assess this wave of Ryuk activity may be, in part, in retaliation for September’s TrickBot disruptions.” Bazar’s components are usually delivered in spear-phishing campaigns operated via SendGrid, a bona fide email marketing service. The emails contain links to Microsoft Office or Google Docs files, and the lure usually relates to a threat of employee termination or a debit payment. In turn, these emails link to the initial payload, a headless preliminary loader that ultimately downloads, unpacks and loads Bazar. The firm added that newer campaigns seem to forgo the spam distribution in favour of human-operated attacks against exposed admin interfaces or cloud services. Typically, once they have gained control of the target system using Bazar, Wizard Spider will download a post-exploitation toolkit, such as Cobalt Strike or Metasploit, to gather target information and enumerate the network, at which point they will harvest credentials to move into other systems and compromise the entire network – then they will deploy Ryuk ransomware. NHS Digital said current Bazar campaigns could accomplish this in under five hours.


Implementing a Staged Approach to Evolutionary Architecture

Traditionally, software architecture and design have been treated as initial phases of a project. In this approach, the architecture decisions were considered valid for the entire life of the system. With the wisdom of ages and in reaction to industry transformations, we have started to see architecture as something that evolves. This evolution necessitates a different set of approaches in the direction of continuous planning, facilitated via continuous integration, dashboards, and tools, thus providing guide rails for systems to evolve. This article focuses on these approaches and tools to support the journey. We are in the midst of a rapidly changing environment. As Rebecca Parsons discussed in a presentation on evolutionary architecture, the changes span business models, requirements, and customer expectations. The technology landscape also changes quite often. In a broader sense, the changes are happening at an unparalleled rate and with unparalleled impact on our environment. ... Smartphones reached major penetration in the last 10 years. Software, a key ingredient of all these, changes even faster. Sometimes, the software frameworks we use are no longer relevant by the time of release.
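
One common way to provide those guide rails is an architectural fitness function: an automated check, run alongside continuous integration, that fails when the system drifts from an agreed rule. The sketch below is a hypothetical example (the package names and layering rule are invented) rather than anything prescribed in the article.

import ast
from pathlib import Path

def imported_modules(path):
    """Yield the module names imported by one Python source file."""
    tree = ast.parse(Path(path).read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            yield from (alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            yield node.module

def test_web_layer_does_not_import_db_layer():
    # Layering rule: code under myapp/web must not import myapp.db directly.
    for source_file in Path("myapp/web").rglob("*.py"):
        for module in imported_modules(source_file):
            assert not module.startswith("myapp.db"), (
                f"{source_file} imports {module}, breaking the layering rule"
            )

Run as part of every build, a check like this turns an architectural intention into something the team cannot silently drift away from.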


Digital Business Opportunities Surge With IoT-Based Sensors At The Edge

Sensor data from machines – wherever they are located – carries heightened importance in a pandemic-driven business environment of unpredictable starts and stops. That’s because it provides critical visibility into what’s going on within machines across the business. For example, Wallis reported a surge in customer inquiries about using IoT to accomplish maintenance tasks automatically, remotely, and safely. “Interest is high in IoT-enabled automation from organizations that want to get the job done with minimal employee risk and fewer productivity losses,” said Wallis. “Remote asset diagnostics and monitoring gives companies 24/7 visibility about machine performance, eliminating unnecessary physical maintenance calls. The same applies to procurement transparency, where sensors on items reduce the need for physical inspections.” But the benefits of IoT don’t stop there. Connected IoT-based data from machines was game-changing for a power generation company based in Italy, turning an essentially commoditized business into a value-based service that increased customer loyalty. Using SAP Internet of Things, SAP Edge Services, and SAP Predictive Maintenance and Service, the company brought data together from the edge, meaning machine performance at power plants worldwide, with data from various systems including supply chain, warehouse management, machine repair and maintenance.


Take back control of IT with cloud native IGA

Legacy solutions have painted themselves into the corner of maintaining a large amount of custom code. This makes upgrades costly, so they don’t happen. That means customers suffer by not being able to adopt new features, bug fixes and new capabilities to support their new business and compliance requirements. The primary reason why legacy software projects don’t get fully completed and go over budget is known as the 80/20 rule. Organizations can solve 80% of the problems or challenges they have with the software as it is, but everybody wants to solve that last 20%. And that 20% isn’t a quick fix – it takes 10 times the amount of time that the first 80% took. Understandably, organizations want to try to tackle the more challenging problems, which always require a high degree of customization. It’s very difficult for organizations to maintain the highly customized code that the first generation of IGA products required in their environments. All those changes to the code will then need to be maintained. But modern IGA has learned from all the coding requirements of the past and now provides a much simpler way to give users different levels of access. The identity governance and administration market started with highly regulated businesses. However, all industries are now impacted.


How remote access technology is improving the world as we know it

Globalisation and a dramatic uptick in both the need and desire for remote working have resulted in a dispersed workforce — one in which it is easy to lose both professional and personal connection. But the unprecedented speed of digital transformation, technologies such as 5G, and improving consumer hardware such as smartphones mean that the prompt adoption of Augmented Reality (AR) in remote support is rapidly closing the connection gap. ... AR can be used to upskill these employees, and to train new ones. When onboarding a new member of staff, ensuring that the employee is aware of the correct protocols and procedures is often critical. For example, when a new employee is familiarising themselves with a machine, an AR-capable smartphone or tablet can provide relevant training to ensure it’s operated correctly. If this technology were not available, uncertainties could lead to a break in compliance, safety issues, or even increased downtime — all critical issues in multiple industries, including manufacturing. Today, this technology goes beyond needing an AR-capable device to hand, though. Features such as session recording and being able to take a screenshot of the live video stream are increasingly being used to create a pool of expert knowledge that is readily available on demand.


Value vs Time: an Agile Contract Model

The cost of bug fixes is included in the price, so our interest is to have as few bugs as possible in our software. This is obviously great value for our customers, but also for users, who will run into fewer bugs while using the software. To do this, we use common agile practices and methodologies such as TDD (test-driven development), Pair Programming, pull/merge request management, and a strict procedure of verification and human testing before releasing to the customer. Also, continuous improvement techniques such as retrospective meetings and a lot of training help us deploy higher-quality software. We have a clear DoD (Definition of Done) shared with the customer for each User Story (which also covers the UX/UI mockups for each US), and the teams are autonomous in managing the implementation part, while respecting the DoD and a minimum level of quality that is guaranteed by the practices and processes listed. Including any bug fix in the User Story development cost also has a commercial advantage for Zupit. Customers don’t always "digest" that bugs are part of the software development process and aren’t happy to pay the cost of fixing them. A model where the supplier takes care of this aspect helps us to convince customers of the quality of our work and to close contracts more easily.



Quote for the day:

"The role of leaders is not to get other people to follow them but to empower others to lead." -- Bill George