Welcome to the year 2025 -- suddenly pulled five years forward. It's time to make bold moves with technology. Those digital dreams that have been simmering on the back burner need to be brought forward -- and IT professionals need to step up and lead the way. Blackburn and his co-authors even have data showing that boldness with technology keeps businesses ahead of the game: almost half (47%) of incumbent companies adopting new digital ways saw revenue growth exceeding 10% annually over the past three years, versus 30% of their slower-to-adopt counterparts. To accelerate digital adoption and meet the needs of a suddenly changed world, the McKinsey analysts make a series of recommendations -- which, again, mean new roles and leadership opportunities for IT professionals ... This is the time to simplify and focus to avoid being overwhelmed, the McKinsey team adds. "This is perhaps the first global crisis in which companies are in a position to collect and evaluate real-time data about their customers and what they are doing, or trying to do, during this time of forced virtualization."
Interestingly, researchers observed that the malware’s operators don’t seem interested in wide-scale infection. In fact, according to the firm’s telemetry, only around 300 infection attempts have been observed on Android devices since 2016 — mainly in India, Vietnam, Bangladesh and Indonesia, with other infections found in Algeria, Iran and South Africa, and several more in Nepal, Myanmar and Malaysia. “Usually if malware creators manage to upload a malicious app in the legitimate app store, they invest considerable resources into promoting the application to increase the number of installations and thus increase the number of victims,” explained the researchers in the writeup. “This wasn’t the case with these newly discovered malicious apps. It looked like the operators behind them were not interested in mass spread. For the researchers, this was a hint of targeted APT activity.” The types of applications that the malware mimics include Flash plugins, cleaners and updaters.
Complexity is obvious when you look for it — for example, in Boeing’s 737 Max 8 design, the 500 percent increase in regulation over 25 years within the U.K. pensions industry, or the space shuttle Challenger disaster, which was preceded by warnings that were ignored because they were presented on a PowerPoint slide that has since become notorious for its density. Simplicity, however, is often there, hiding in plain sight. It’s not just companies such as Zentatix, dentsu X, and Tata Sons that exemplify it. Apple remains an almost perfect example of a company committed to simple and functional design, despite the back end of its actual product being fiendishly complex. As Philip Davies, president of Siegel+Gale, told me: “Simplicity is the intersection between clarity and surprise.” This recognizes that simplicity sits on a spectrum ranging from chaos and complication all the way through to something too simplistic, and is the balancing corrective. Yes, you can have multiple product ranges, with many different iterations and requirements for design, software, manufacturing, sales, service, and so on.
DataOps is already enabling businesses to transform their data management and data analytics processes. For example, like DevOps, DataOps lets teams easily spin up isolated, safe and disposable testing environments that allow them to experiment and innovate (Principle 12 of the Manifesto). However, while developers typically focus on applications with small test databases, data analysts and scientists may need to spin up a sandbox environment that includes applications along with terabytes or even hundreds of terabytes of data. Intelligent DataOps strategies such as automation, cloning and predictive analytics make spinning up these massive disposable data environments possible. DataOps principles are also enabling businesses to act on their massive production datasets in ways that were unimaginable just a few years ago. For example, DreamWorks can now easily share the datasets of its films in development with teams of creative artists around the world, enabling rapid collaboration and dramatically shortening production times.
"Security at this point is a best effort scenario," one respondent commented, according to (ISC)2. "Speed has become the primary decision-making factor. This has led to more than a few conversations about how doing it insecurely will result in a worse situation than not doing it at all." One respondent summed up the factors that have contributed to an opportune situation for cybercriminals -- most notably, the fact that 100% of staff were sent to work from home before most organizations were really ready, (ISC)2 said. "COVID-19 hit us with all the necessary ingredients to fuel cybercrime … chaos caused by technical issues plaguing workers not used to [working from home], panic, and desire to 'know more' and temptation to visit unverified websites in search of up-to-the-minute information," the respondent said, according to (ISC)2. Also, remote workforce technology supported by vendors is driven by "new feature time to market and not security," the respondent continued, (ISC)2 said. Other issues the respondent cited were employees unfamiliar with a process having to take over the responsibilities of coworkers affected by COVID-19.
Deep learning depends far more heavily on massive training data than traditional machine learning methods do, because every neuron in every layer must converge to a correct weight over many epochs of training. Real-world scenarios, however, are often far from this ideal: there are cases where training data is expensive or difficult to collect. Therefore, there is a need to create high-performance learners trained with more easily obtained data from different domains. This methodology is referred to as transfer learning. The biggest benefit of transfer learning shows when the target dataset is relatively small; in many such cases, the model is prone to overfitting, and data augmentation may not always solve the problem.
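The core idea can be illustrated with a deliberately tiny, self-contained Java sketch (a toy linear model, not a real deep-learning setup, and not from the article): the "pretrained" feature extractor's weights are frozen, and only a small output head is trained on the scarce target data — which is exactly what limits the number of parameters that can overfit.

```java
import java.util.Arrays;

public class TransferToy {
    // "Pretrained" feature extractor with frozen weights: a stand-in for the
    // early layers of a network trained on a large source-domain dataset.
    static final double[][] W_FROZEN = {
        {0.5, -0.3}, {0.2, 0.8}, {-0.6, 0.1}, {0.4, 0.4}
    };

    // Frozen layers: fixed linear map followed by ReLU; never updated.
    static double[] extract(double[] x) {
        double[] f = new double[W_FROZEN.length];
        for (int i = 0; i < f.length; i++) {
            double s = 0;
            for (int j = 0; j < x.length; j++) s += W_FROZEN[i][j] * x[j];
            f[i] = Math.max(0, s);
        }
        return f;
    }

    static double sigmoid(double z) { return 1 / (1 + Math.exp(-z)); }

    // Transfer step: train ONLY the small output head (logistic regression
    // on the frozen features) against the tiny target dataset.
    // Returns {w0..w3, bias}.
    static double[] trainHead(double[][] X, int[] y, int epochs, double lr) {
        double[] w = new double[W_FROZEN.length + 1]; // last slot is the bias
        for (int e = 0; e < epochs; e++) {
            for (int n = 0; n < X.length; n++) {
                double[] f = extract(X[n]);
                double z = w[w.length - 1];
                for (int i = 0; i < f.length; i++) z += w[i] * f[i];
                double g = sigmoid(z) - y[n]; // gradient of the logistic loss
                for (int i = 0; i < f.length; i++) w[i] -= lr * g * f[i];
                w[w.length - 1] -= lr * g;
            }
        }
        return w;
    }

    static double predict(double[] w, double[] x) {
        double[] f = extract(x);
        double z = w[w.length - 1];
        for (int i = 0; i < f.length; i++) z += w[i] * f[i];
        return sigmoid(z);
    }

    public static void main(String[] args) {
        // Tiny target dataset: label 1 when x0 + x1 > 1.
        double[][] X = {{0.1, 0.2}, {0.9, 0.9}, {0.8, 0.7}, {0.2, 0.1}};
        int[] y = {0, 1, 1, 0};
        double[] w = trainHead(X, y, 500, 0.5);
        for (double[] x : X) {
            System.out.println(Arrays.toString(x) + " -> " + predict(w, x));
        }
    }
}
```

Only five parameters are learned here, no matter how large the frozen extractor is — that shrinking of the trainable surface is what makes transfer learning viable on small target datasets.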
The test coach role is a fundamental part of Domain Oriented Testing (DOT). It’s a way of instilling into the team a sense of product quality and pride in their code, combined with a particular way of working that results in a system more in tune with the business domain and requirements. ... Overall, the test coach is a demanding, highly skilled role. You must have a good grasp of all the disciplines you’re “nudging” the stakeholders towards. You must have great people skills, or at least a knack for presenting things so that people realise you’re on their side, working with them. ... In this agile climate, QA has become a dirty word for many organisations. Fairly or not, for many people QA is now synonymous with waterfall, big-bang integration, process overload with long forms to fill out, and a department separated from the developers, promoting a “sling it over the fence to the testers” approach to software delivery. But let’s be honest, a test coach’s purpose is very similar to that of QA: to introduce and maintain a process that gets the team focused on software quality.
Because blockchain technologies are uniquely suited to verifying, securing and sharing data, they’re ideal for managing multi-party, inter-organizational, and cross-border transactions. Over the past five years, enterprises across the globe have vetted the technology with thousands of proofs of concept, but live deployments have been slow to come because partners using blockchain as a shared ledger have to agree on IP rights, governance, and business models. Government regulations have also impeded its widespread use. It has taken the Covid-19 pandemic to push through the obstacles to blockchain adoption. The virus has revealed the weaknesses in our supply chains, our inability to deploy resources where they are most needed to address the pandemic, and difficulties in capturing and sharing the data needed to make rapid decisions in managing it. Blockchain solutions that have been under development for years have been repurposed and unleashed to address these challenges.
It's a long list of attack types, but understanding what's possible allows us to think about the threats our applications face. More importantly, they provide an opportunity to think about defences and how we protect machine-learning systems: building better, more secure training sets, locking down ML platforms, controlling access to inputs and outputs, and working with trusted applications and services. Attacks are not the only risk: we must be aware of unintended failures -- problems that come from the algorithms we use or from how we've designed and tested our ML systems. We need to understand how reinforcement learning systems behave, how systems respond in different environments, whether there are natural adversarial effects, and how changing inputs can change results. If we're to defend machine-learning applications, we need to ensure that they have been tested as fully as possible, in as many conditions as possible. The apocryphal stories of early machine-learning systems that identified trees instead of tanks, because all the training images were of tanks under trees, are a sign that these aren't new problems, and that we need to be careful about how we train, test, and deploy machine learning.
This article describes how to implement reactive REST APIs in Java with Quarkus rather than using synchronous endpoints. In order to do this, the Java types CompletableFuture and CompletionStage are needed. The article explains how to use these types and how to chain asynchronous method invocations, including exception handling and timeouts. The first question you'll probably ask is: why change old habits and not use imperative code? After all, implementing asynchronous code is rather unusual for some Java developers and requires a new way of thinking. I think the short answer is efficiency. I’ve run two load tests comparing reactive code with imperative code. In both cases the response times of the reactive code were only half the duration of the imperative code. While these tests are not representative of all types of scenarios, I think they nicely demonstrate the benefits of reactive programming.
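The chaining pattern the article refers to can be sketched with plain java.util.concurrent, no Quarkus required; fetchGreeting here is a hypothetical stand-in for an asynchronous service call, not an API from the article:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class ReactiveDemo {
    // Simulated asynchronous service call: returns immediately with a
    // CompletionStage that completes on a background thread.
    static CompletableFuture<String> fetchGreeting(String name) {
        return CompletableFuture.supplyAsync(() -> "Hello, " + name);
    }

    public static void main(String[] args) {
        CompletableFuture<String> result = fetchGreeting("Quarkus")
            .thenApply(String::toUpperCase)       // chain a transformation
            .orTimeout(2, TimeUnit.SECONDS)       // fail the stage if it takes too long (Java 9+)
            .exceptionally(ex -> "fallback: " + ex.getMessage()); // recover from any error

        System.out.println(result.join());        // prints HELLO, QUARKUS
    }
}
```

The calling thread never blocks inside the chain itself; each stage runs when the previous one completes, which is where the efficiency gain under load comes from.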
Quote for the day:
"If liberty means anything at all, it means the right to tell people what they do not want to hear." -- George Orwell