Daily Tech Digest - May 03, 2021

Reinforcement learning competition pushes the boundaries of embodied AI

Creating reinforcement learning models presents several challenges. One of them is designing the right set of states, rewards, and actions, which can be very difficult in applications like robotics, where agents face a continuous environment affected by complicated factors such as gravity, wind, and physical interactions with other objects. This is in contrast to environments like chess and Go, which have discrete, well-defined states and actions. Another challenge is gathering training data: reinforcement learning agents need to train on data from millions of episodes of interaction with their environments. This constraint can slow robotics applications, because they must gather their data from the physical world, as opposed to video and board games, which can be played in rapid succession on several computers. To overcome this barrier, AI researchers have tried to create simulated environments for reinforcement learning applications. Today, self-driving cars and robotics often use simulated environments as a major part of their training regime. “Training models using real robots can be expensive and sometimes involve safety considerations,” Chuang Gan, principal research staff member at the MIT-IBM Watson AI Lab, told TechTalks.
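
The episode-driven training loop described above is easy to picture in code. Below is a minimal sketch using the OpenAI Gym API as it looked in 2021; the random policy is a placeholder for whatever learning algorithm the agent actually uses.

```python
import gym  # a simulated environment: no physical robot, no safety risk

env = gym.make("CartPole-v1")
for episode in range(1_000_000):  # agents may need data from millions of episodes
    state = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()  # placeholder: a real agent samples from a learned policy
        state, reward, done, info = env.step(action)  # simulator returns the next state and a reward
        # a real agent would update its policy here from (state, action, reward) transitions
```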


ONNX Standard And Its Significance For Data Scientists

The ONNX standard aims to bridge the gap and enable AI developers to switch between frameworks based on the project’s current stage. Currently, the frameworks supported by ONNX include Caffe, Caffe2, the Microsoft Cognitive Toolkit, MXNet, and PyTorch. ONNX also offers connectors for other standard libraries and frameworks. “ONNX is the first step toward an open ecosystem where AI developers can easily move between state-of-the-art tools and choose the combination that is best for them,” Facebook said in an earlier blog post. It was specifically designed for the development of machine learning and deep learning models. It includes a definition for an extensible computation graph model along with built-in operators and standard data types. ONNX is a standard format for both DNN and traditional ML models. ONNX’s interoperability gives data scientists the flexibility to choose their frameworks and tools and accelerate the process from the research stage to the production stage. It also allows hardware developers to optimise deep learning-focused hardware against a standard specification compatible with different frameworks.
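
As a concrete (hypothetical) example of the interoperability ONNX enables, here is how a PyTorch model can be exported to the ONNX format, after which it can be loaded by any ONNX-compatible runtime or framework:

```python
import torch
import torchvision

# Train in PyTorch (a pretrained model stands in for your own here)...
model = torchvision.models.resnet18(pretrained=True)
model.eval()

# ...then export a framework-neutral .onnx artifact.
dummy_input = torch.randn(1, 3, 224, 224)  # example input that defines the graph's shapes
torch.onnx.export(model, dummy_input, "resnet18.onnx",
                  input_names=["input"], output_names=["output"])
```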


Microsoft warns of damaging vulnerabilities in dozens of IoT operating systems

According to an overview compiled by the Cybersecurity and Infrastructure Security Agency, 17 of the affected products already have patches available, while the rest either have updates planned or are no longer supported by the vendor and won’t be patched. See here for a list of impacted products and patch availability. Where patching isn’t available, Microsoft advises organizations to implement network segmentation, eliminate unnecessary connections to operational technology control systems, use (properly configured and patched) VPNs with multifactor authentication, and leverage existing automated network detection tools to monitor for signs of malicious activity. While the scope of the vulnerabilities across such a broad range of different products is noteworthy, such security holes are common in connected devices, particularly in the commercial realm. Despite billions of IoT devices flooding offices and homes over the past decade, there remains virtually no universally agreed-upon set of security standards – voluntary or otherwise – to bind manufacturers. As a result, the design and production of many IoT products end up being dictated by other pressures, such as cost and schedule.


Automate the hell out of your code

Continuous integration is a software development principle that suggests developers should write small chunks of code; when they push this code to their repository, it is automatically tested by a script running on a remote machine, automating the process of adding new code to the code base. This automates software testing, increasing developers’ productivity and keeping their focus on writing code that passes the tests. ... If continuous integration is about adding new chunks of code to the code base, then CD is about automating the building and deployment of our code to the production environment, ensuring that production is kept in sync with the latest features in the code base. You can read this article for more on CI/CD. I use Firebase Hosting, so we can define a workflow that builds and deploys our code to Firebase Hosting rather than having to do that ourselves (a sketch of such a workflow appears below). But we have one or two issues to deal with: normally we can deploy code to Firebase from our computer because we are logged in from the terminal, but how do we authorize a remote CI server to do this? Open up a terminal and run the command `firebase login:ci`; it will return a FIREBASE_TOKEN that we can use to authenticate CI servers.
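
A minimal GitHub Actions workflow along these lines might look as follows. This is a sketch under assumptions: the build command, Node version, and secret name (FIREBASE_TOKEN, holding the token from `firebase login:ci`) are placeholders to adapt to your project.

```yaml
# .github/workflows/deploy.yml (hypothetical)
name: Build and deploy to Firebase Hosting
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm ci && npm run build   # build step: adjust to your project
      - run: npx firebase-tools deploy --only hosting --token "$FIREBASE_TOKEN"
        env:
          FIREBASE_TOKEN: ${{ secrets.FIREBASE_TOKEN }}  # token produced by `firebase login:ci`
```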


15 open source GitHub projects for security pros

For dynamic analysis of a Linux binary, malicious or benign, PageBuster makes it super easy to retrieve dumps of executable pages within packed Linux processes. This is especially useful when pulling apart malware packed with specialized run-time packers that introduce obfuscation and hamper static analysis. “Packers can be of growing complexity, and, in many cases, a precise moment in time when the entire original code is completely unpacked in memory doesn't even exist,” explains security engineer Matteo Giordano in a blog post. PageBuster also conducts its page-dumping carefully so as not to trigger any anti-virtual-machine or anti-sandboxing defences present in the analyzed binary. ... The free AuditJS tool can help JavaScript and NodeJS developers ensure that their project is free from vulnerable dependencies, and that the dependencies of dependencies included in their project are free from known vulnerabilities. It works by peeking into what’s inside your project’s manifest file, package.json. “The great thing about AuditJS is that not only will it scan the packages in your package.json, but it will scan all the dependencies of your dependencies, all the way down. ...” said developer Dan Miller in a blog post.
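
Usage is a one-liner from the project root (a sketch; run it against your own package.json):

```sh
# Scans package.json and the full transitive dependency tree against the OSS Index vulnerability database
npx auditjs ossi
```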


Securing the Future of Remote Working With Augmented Intelligence

Emerging technology can potentially reshape the dimensions of organizations. Augmented reality, as well as virtual reality, will play a crucial role in office design trends that have already come into being. Architecture organizations are already dedicating space to virtual reality – an area equipped with all the essential requirements of virtual reality. Many businesses are likely to take this step as more meetings are held virtually to accommodate the spread-out workforce. Organizations are currently spending a hefty amount on virtual solutions and will continue to invest in the future. Paul Richards, director of business development at HuddleCamHD, affirmed that “numerous meeting rooms will become more similar to TV production studios instead of collaborative spaces.” Erik Narhi, an architect and computational design lead at the Los Angeles office of global design company Buro Happold, also agreed that in this current era it is impossible to neglect augmented reality and virtual reality. Hybrid work from home is not going away anytime soon.


Risk-based vulnerability management has produced demonstrable results

Risk-based vulnerability management doesn’t ask “How do we fix everything?” It merely asks, “What do we actually need to fix?” A series of research reports from the Cyentia Institute have answered that question in a number of ways, finding, for example, that attackers are more likely to develop exploits for some vulnerabilities than others. Research has shown that, on average, about 5 percent of vulnerabilities actually pose a serious security risk. Common triage strategies, like patching every vulnerability with a CVSS score above 7, were, in fact, no better than chance at reducing risk. But now we can say that companies using RBVM programs are patching a higher percentage of their high-risk vulnerabilities. That means they are doing more, and there’s less wasted effort. The time it took companies to patch half of their high-risk vulnerabilities was 158 days in 2019. This year, it was 27 days. And then there is another measure of success. Companies start vulnerability management programs with massive backlogs of vulnerabilities, and the number of vulnerabilities only grows each year. Last year, about two-thirds of companies using a risk-based system reduced their vulnerability debt or were at least treading water. This year, that number rose to 71 percent.


A definitive primer on robotic process automation

This isn’t to suggest that RPA is without challenges. The credentials enterprises grant to RPA technology are a potential access point for hackers. When dealing with hundreds to thousands of RPA robots with IDs connected to a network, each could become an attack vector if companies fail to apply identity-centric security practices. Part of the problem is that many RPA platforms don’t focus on solving security flaws. That’s because they’re optimized to increase productivity and because some security solutions are too costly to deploy and integrate with RPA. Of course, the first step to solving the RPA security dilemma is recognizing that there is one. Realizing that RPA workers have identities gives IT and security teams a head start when it comes to securing RPA technology prior to its implementation. Organizations can extend their identity and governance administration (IGA) to focus on the “why” behind a task, rather than the “how.” Through a strong IGA process, companies adopting RPA can implement a zero trust model to manage all identities — from human to machine and application.


Demystifying Quantum Computing: Road Ahead for Commercialization

CIOs determining their next steps for quantum computing must first consider the immediate use cases for their organization and how investments in quantum technology can pay dividends. For example, for an organization prioritizing accelerated or complex simulations, whether for chemical research or critical life sciences work like drug discovery, the increase in computing performance that quantum offers can make all the difference. For some organizations, immediate needs may not be as well defined, but there could be an appetite to simply experiment with the technology. As many companies already put a lot behind R&D for other emerging technologies, this can be a great way to explore the idea of quantum computing and what it could mean for your organization. However, as with all technology, investing in something simply for the sake of investing in it will not yield results. Quantum computing efforts must map back to a critical business or technology need, not just for the short term but also for the long term as quantum computing matures. CIOs must also consider how the deployment of the technology changes existing priorities, particularly around efforts such as cybersecurity.


Leaders Talk About the Keys to Making Change Successful and Sustainable

Many organizations that have been around for a while have established processes that are hard to change. Mitch Ashley, CEO and managing analyst at Accelerated Strategies Group, who’s helped create several DevOps organizations, shared his perspective about why changing a culture can be so difficult. “Culture is a set of behaviors and norms, and also what’s rewarded in an organization. It’s both spoken and unspoken. When you’re in an organization for a period of time, you get the vibe pretty quickly. It’s a measurement culture, or a blame culture, or a performance culture, or whatever it is. Culture has mass and momentum, and it can be very hard to move. But, you can make cultural changes with work and effort.” What Mitch is referring to, this entrenched culture that can be hard to change, is sometimes called legacy cultural debt. I loved Mitch’s story about his first foray into DevOps because it’s a great place to start if you’re dealing with a really entrenched legacy culture. He and his team started a book club, and they read The Phoenix Project. He said, “The book sparked great conversations and helped us create a shared vision and understanding about our path to DevOps. ...”



Quote for the day:

"The secret of leadership is simple: Do what you believe in. Paint a picture of the future. Go there. People will follow." -- Seth Godin

Daily Tech Digest - May 02, 2021

In the Race to Hundreds of Qubits, Photons May Have "Quantum Advantage"

The more qubits are quantum-mechanically entangled together, the more calculations they can simultaneously perform. A quantum computer with enough qubits could in theory achieve a “quantum advantage” enabling it to grapple with problems no classical computer could ever solve. For instance, a quantum computer with 300 mutually entangled qubits could theoretically perform more calculations in an instant than there are atoms in the visible universe. Ostensible quantum computing advantages aside, relative advantages of one quantum computing platform versus another are less clear. The quantum computers that tech giants such as Google, IBM and Intel are investigating typically rely on qubits based either on superconducting circuits or trapped ions. Those systems typically require expensive and elaborate cryogenics, keeping them just a few degrees (sometimes mere fractions of a single degree) above absolute zero. The expensive, bulky systems needed to keep qubits at such frigid temperatures can make it extraordinarily challenging to scale these platforms up to high numbers of qubits.
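
The "more calculations than atoms" claim is plain exponential arithmetic: n mutually entangled qubits span a state space of 2^n amplitudes, and

```latex
2^{300} \approx 2 \times 10^{90} \;\gg\; 10^{80},
```

where 10^80 is a common estimate of the number of atoms in the visible universe.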


Move Over Artificial Intelligence, There Is A New AI In Town

The lack of Data Scientists has caused many training efforts to focus on teaching the core of algorithms and enabling people from all walks of life to build artificial intelligence solutions via products that democratize or automate data science. What we need for Augmented Intelligence is different. We need people who are subject matter experts in their fields, like doctors, to understand just enough Artificial Intelligence to work collaboratively with one. This means they must have a level of Artificial Intelligence Literacy. AI Literacy can help individuals understand the core concepts of how artificial intelligence works, the context to understand its strengths and weaknesses in their application, the capabilities to apply their understanding to solve problems, and the creativity to see how to innovate with it for their domain. Why are all four of these Cs important? Augmented Intelligence is about combining the intelligence of humans and machines, where both contribute, rather than humans becoming the caretakers of the machines. This requires the human to not just understand the concepts and have the capability to apply them in a specific context, but also to apply human creativity to envision new uses of the human/machine combo.


Low-code and no-code is shifting the balance between business and technology professionals

IT departments are trying to balance two things. On one side, they see a growing interest from business experts in solving their own workgroup-level problems themselves. On the other, they want to maintain control and governance over any software created in the organization. It's often the application development managers, struggling with never-ending backlogs and short-staffing, who are most bullish on enterprise low-code -- they see a way to address both of these sets of demands. With low-code and no-code, they can give business units skill-appropriate tools to solve some of their own problems, while ensuring that anything they build goes through a centralized process for quality and security - the same process their enterprise software development goes through. ... This wave of low-code adoption is nothing but good news for traditional software developers. In our customer base, developers get to deliver solutions faster, avoid rework and technical debt, and elevate the problem space they operate in. That is, they get to work on harder, more interesting software problems - say, software architecture, or working through the creation of complex logic.


Cyber Extortion Thriving Thanks to Accellion FTA Hits

Some ransomware gangs run their own attacks, but many operations now function using a ransomware-as-a-service model, in which operators develop code and infrastructure and affiliates infect victims. For every victim who pays, the operator and affiliate split the profits, with affiliates often keeping 60% to 70%. Experts say this division of labor has helped RaaS operations maximize profits - especially if they can recruit highly skilled affiliates. The type of ransomware most encountered by victims assisted by Coveware in Q1 was Sodinokibi, aka REvil, followed by Conti, Lockbit, Clop and Egregor. All are prolific RaaS operations. But competition remains fierce between RaaS operations as they attempt to recruit top affiliates to maximize their paydays, including via big game hunting, which is hitting larger victims for the prospect of bigger ransom returns. Seeking fresh avenues for finding new victims, some RaaS operations have begun running campaigns using malware written to crypto-lock Unix and Linux systems. Defray777, Mespinoza, Babuk, Nephilim and Darkside have already deployed such code, and Sodinokibi suggests it will do so, Coveware says.


Lessons in simplicity strategy

Six, as I have written before, is a useful organizing number, and is the smallest in a range of numbers described in mathematics as “perfect.” A number is perfect if it is a positive integer equal to the sum of its proper divisors (its divisors excluding the number itself). Six, of course, is the sum of one, two, and three. Six is also workable, definable, measurable, and memorable. If you adopt a small-is-better mentality (I love the two-pizza rule, which says if your working group can’t be fed by two pizzas, it’s probably too large), six gives you a guideline that can be established and maintained fairly quickly. The hexagon, nature’s diamond, lends itself beautifully to organizational management because of the way it embodies interconnection, resilience, and economy. Like the equilateral triangle and the square, the hexagon tessellates, which is to say it can connect to the same shape without gaps (unlike, say, circles, an all-too-popular PowerPoint intersecting image). This is crucial, because so much of what people do intersects and connects. The hexagon is a powerful visual aid and connects us to network theory in which “edges” play a crucial role.
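
For the record, "sum of its proper divisors" works out as:

```latex
6 = 1 + 2 + 3, \qquad 28 = 1 + 2 + 4 + 7 + 14
```

Six and 28 are the two smallest perfect numbers; the next two are 496 and 8,128.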


Lawmakers Seek to Expand CISA's Role

A five-year national risk management cycle review by CISA, as called for by Hassan and Sasse, is needed to better address threats to critical infrastructure, says Tim Wade, a former network and security technical manager with the U.S. Air Force. He's now a technical director at the security firm Vectra AI. "Failure to have a credible and timely recovery strategy places nontrivial strain on detection and response requirements, whereas protecting and enabling rapid recovery removes tension from the entire system," Wade says. "This move marks a step in the right direction, and even as the road ahead is long, we all have a vested interest in its success." The various Congressional proposals regarding CISA could go a long way toward addressing threats to IT and operation technology networks, says Joseph Carson, chief security scientist and advisory CISO at security firm Thycotic. "One of the most vital areas to focus on is regaining visibility and control of the network as a whole, including the disparate IT and OT systems. In particular, this means having a firm command of how systems are accessed," Carson says.


If you step back and think about the conversation as an opportunity to learn rather than a need to defend, it helps open the aperture into a dialogue instead of a debate. Somewhere along life’s path (we usually refer to this as getting older) learning is replaced with knowledge, yet if we make the choice to continuously learn from others’ perspectives, learning can be lifelong, and knowledge can grow rather than merely sustain itself. Consider that openness to experience—the degree to which you are interested in exploring new ideas, nurturing your hungry mind, and replacing routine with unconventional and unfamiliar adventures—decreases as we get older. The more we know, the less interested we are in learning something new. As Lisa Feldman Barrett notes in her recent book, our brains are not for thinking: they are for saving energy and turning decisions into autopilot mode. It’s okay to want to listen and learn and still hold onto your own beliefs and values. The act of listening doesn’t indicate agreement. In fact, it is a lot easier to agree when we don’t listen to one another. Remember that the difference between judging and pre-judging is understanding, and that in order to understand you really need to be willing to listen and learn.


Australia's eSafety and the uphill battle of regulating the ever-changing online realm

Appearing before the Parliamentary Joint Committee on Intelligence and Security as part of its inquiry into extremist movements and radicalism in Australia, Inman Grant said that while the threshold in the new take-down request powers is quite high, it will give her agency a fair amount of leeway to look at intersectional factors, such as the intent behind the post. "I think that the language is deliberately -- it's constrained in a way to give us some latitude ... we have to look at the messenger, we have to look at the message, and we have to look at the target," she said on Thursday. The Act also will not apply to groups of people, only to individuals. The commissioner guessed this was due to striking a balance with freedom of expression. "To give us a broader set of powers to target a group or target in mass, I think would probably raise a lot more questions about human rights," she said. She said it's a case of "writing the playbook" as it unfolds, given there's no similar law internationally to help guide the Act. Inman Grant said she has tried to set expectations that she isn't about to conduct "large scale rapid fire".


Corporate e-waste: The unfashionable global crisis

Ensuring redundant business technology is reused is an important way to reduce our environmental impact. When a device reaches the end of its first lifecycle and a business needs to upgrade, that device still holds value, both to the company and to a second user. Giving a device a second life reduces carbon emissions and electronic waste. Every laptop that is reused displaces the need to manufacture a new one, also saving natural resources. We like to say that every time we rehome a device, we’re saving the planet one laptop at a time. Dumping old devices also represents a wasted opportunity to help people access used IT equipment at more affordable prices. Not everyone needs or can afford new tech, so a vibrant secondhand market is crucial to closing the digital divide. Plus, while the disposal of IT equipment is a hassle and an expense for businesses, ensuring old devices are reused gives equipment extra value – a value which can be offset against the cost of purchasing new IT devices. With organisations faced with accelerating the shift to mobile devices, this could free up much-needed cash to fund digital transformation projects.


The Biggest Data Management Mistake Chief Data Officers Make

Most advice to Chief Data Officers in these situations comes down to this: Ensure that your data strategy provides business value — e.g., increasing revenue and improving cost control — and risk management — e.g., inclusive of compliance and privacy. While this may seem like the right advice, it puts the onus on the CDO to propose business value to the rest of the C-suite instead of supporting the initiatives in which leaders already have invested. These business initiatives require data and analytics that the CDO can provide. But if CDOs initiate their own projects and separate business value propositions, the existing business initiatives are often left without the data management platform they require. This results in a divergence of projects that don’t support each other: the business initiatives will continue to generate data while the CDO builds a “foundation” of data, creating yet another silo. The difference between proposing and supporting business value may seem subtle, but it’s actually profound. Most IT leaders running enterprise database management today are building up programs that have value independent of major business initiatives.



Quote for the day:

"Every great leader can take you back to a defining moment when they decided to lead." -- John Paul Warren

Daily Tech Digest - May 01, 2021

Is Open Source More Secure Than Closed Source?

Open source software offers greater transparency to the teams that use it; visibility into both the code itself and how it is maintained. Giving organizations access to the source code allows them the opportunity to evaluate the security of the code for themselves. Additionally, users have more visibility into how and what changes are made to the code base, including the pre-release review process, how often dependencies are updated and how developers and organizations respond to security vulnerabilities. As a result, open source software users have a more complete picture of the overall security of the software they’re using. Another major benefit is found in the communities which drive the growth and development of open source software. The vast majority of open source software is backed by communities of forward-thinking developers, many of whom use the same software they build and maintain as a primary means of communicating with team members. Open source developers and the communities around the software value users’ input to a significant degree, and many user suggestions end up getting incorporated into new versions.


Let’s Not Regulate A.I. Out of Existence

A.I. is being used to analyze vast amounts of space data and is having an enormous impact on health care. A.I. image and scan analysis are, for example, helping doctors identify breast and colon cancer. It’s also showing potential in vaccine creation. I guarantee that A.I. will someday save lives. It’s those kinds of A.I.-driven data analysis that get shoved aside by news of an A.I. beating a world-champion Go player or the world’s best-known entrepreneur raising alarms about a situation where “A.I. is vastly smarter than humans.” That kind of fear-mongering leads consumers, who don’t understand the differences between A.I. that scans a crowd of 10,000 faces for one suspect and one that can create recipes based on pleasing ingredient combinations, to mistrust all A.I., and to write the kind of stifling regulation produced by the EU. Even if you still think the negatives outweigh the benefits, we’ll arguably need better and bigger A.I. to manage and sift through the mountains of data we produce every single day. To deny A.I.’s role in this is like saying we don’t need garbage collection services and that our debris can just pile up on street corners indefinitely.


AutoNLP: Automatic Text Classification with SOTA Models

AutoNLP is a tool to automate the process of creating end-to-end NLP models. Developed by the Hugging Face team, it was launched in its beta phase in March 2021. AutoNLP aims to automate each phase that makes up the life cycle of an NLP model, from training and optimizing the model to deploying it. “AutoNLP is an automatic way to train and deploy state-of-the-art NLP models, seamlessly integrated with the Hugging Face ecosystem.” — AutoNLP team. One of the great virtues of AutoNLP is that it implements state-of-the-art models for the tasks of binary classification, multi-class classification, and entity recognition, supported in 8 languages: English, German, French, Spanish, Finnish, Swedish, Hindi, and Dutch. Likewise, AutoNLP takes care of the optimization and fine-tuning of the models. On the security and privacy front, AutoNLP protects data transfers with SSL, and data is private to each user account. As we can see, AutoNLP emerges as a tool that facilitates and speeds up the process of creating NLP models. In the next section, we will see what the experience was like, from start to finish, when creating a text classification model using AutoNLP.
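
Because AutoNLP publishes trained models to your Hugging Face account, they load like any other Hub model. A minimal sketch (the model id below is hypothetical):

```python
from transformers import pipeline

# "username/autonlp-my-task-123" is a placeholder for a model AutoNLP trained for you
classifier = pipeline("text-classification", model="username/autonlp-my-task-123")

print(classifier("This movie was surprisingly good!"))
# e.g. [{'label': 'positive', 'score': 0.98}]  (labels depend on your training data)
```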


5 Reasons Why Artificial Intelligence Won’t Replace Physicians

Even if the array of technologies offered brilliant solutions, it would be difficult for them to mimic empathy. Why? Because at the core of compassion, there is the process of building trust: listening to the other person, paying attention to their needs, expressing the feeling of understanding and responding in a manner that the other person knows they were understood. At present, you would not trust a robot or a smart algorithm with a life-altering decision; or even with a decision whether or not to take painkillers, for that matter. We don’t even trust machines in tasks where they are better than humans – like taking blood samples. We will need doctors holding our hands while telling us about a life-changing diagnosis, their guide through therapy and their overall support. An algorithm cannot replace that. ... More and more sophisticated digital health solutions will require qualified medical professionals’ competence, no matter whether it’s about robotics or A.I. The human brain is so complex and able to oversee such a vast scale of knowledge and data that it merely is not worth developing an A.I. that takes over this job – the human brain does it so well. It is more worthwhile to program those repetitive, data-based tasks, and leave the complex analysis/decision to the person.


Mimicking the brain: Deep learning meets vector-symbolic AI

Machines have been trying to mimic the human brain for decades. But neither the original, symbolic AI that dominated machine learning research until the late 1980s nor its younger cousin, deep learning, have been able to fully simulate the intelligence it’s capable of. One promising approach towards this more general AI is combining neural networks with symbolic AI. In our paper “Robust High-dimensional Memory-augmented Neural Networks” published in Nature Communications, we present a new idea linked to neuro-symbolic AI, based on vector-symbolic architectures. We’ve relied on the brain’s high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. Specifically, we wanted to combine the learning representations that neural networks create with the compositionality of symbol-like entities, represented by high-dimensional and distributed vectors. The idea is to guide a neural network to represent unrelated objects with dissimilar high-dimensional vectors. In the paper, we show that a deep convolutional neural network used for image classification can learn from its own mistakes to operate with the high-dimensional computing paradigm, using vector-symbolic architectures.
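
The core property being exploited here — that independent random high-dimensional vectors are nearly orthogonal, and that symbol-like structure can be built by binding and superposing them — is easy to demonstrate. The following is a minimal illustration of the vector-symbolic idea, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # dimensionality: high enough that random vectors are nearly orthogonal

def rand_vec():
    return rng.choice([-1, 1], size=d)  # random bipolar hypervector for each symbol

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

color, shape = rand_vec(), rand_vec()  # role vectors
red, circle = rand_vec(), rand_vec()   # filler vectors

# bind each role to its filler elementwise, then superpose into one composite vector
scene = color * red + shape * circle

# unbinding with a role vector recovers a noisy copy of its filler
print(cosine(scene * color, red))     # ≈ 0.7: clearly the bound filler
print(cosine(scene * color, circle))  # ≈ 0.0: an unrelated symbol
```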


How to master manufacturing's data and analytics revolution

The Manufacturing Data Excellence Framework, developed by a community of companies hosted by the World Economic Forum’s Platform for Shaping the Future of Advanced Manufacturing and Production, serves this purpose. We introduced this framework, comprising 20 different dimensions with five different maturity levels, in our recent white paper, “Data Excellence: Transforming manufacturing and supply systems”. “One of the challenges we face when discussing the industry transformation towards data ecosystems is the lack of commonality of terminology. It’s very powerful to have a tool in which we have created common definitions and explanations, and around which we can build the foundations towards data sharing excellence in manufacturing,” says Niall Murphy, CEO and Co-founder of EVRYTHNG. The first step is an assessment of the status quo using the framework. Companies will be able to objectively assess their maturity in implementing applications and technological and organizational enablers. They will then be able to compare their individual maturity versus the benchmark and define their individual target state.


Ethics of AI: Benefits and risks of artificial intelligence

Ethical issues take on greater resonance when AI expands to uses that are far afield of the original academic development of algorithms. The industrialization of the technology is amplifying the everyday use of those algorithms. A report this month by Ryan Mac and colleagues at BuzzFeed found that "more than 7,000 individuals from nearly 2,000 public agencies nationwide have used technology from startup Clearview AI to search through millions of Americans' faces, looking for people, including Black Lives Matter protesters, Capitol insurrectionists, petty criminals, and their own friends and family members." Clearview neither confirmed nor denied BuzzFeed's findings. New devices are being put into the world that rely on machine learning forms of AI in one way or another. For example, so-called autonomous trucking is coming to highways, where a "Level 4 ADAS" tractor trailer is supposed to be able to move at highway speed on certain designated routes without a human driver. A company making that technology, TuSimple, of San Diego, California, is going public on Nasdaq. In its IPO prospectus, the company says it has 5,700 reservations so far in the four months since it announced availability of its autonomous driving software for the rigs.


Dale Vince has a winning strategy for sustainability

Fundamentally, it’s more economic to do the right thing than the wrong thing. Renewable energy, for example, is a great democratizing force in world affairs because the wind and the sun are available to every country on the planet, whereas oil and gas are not. We fight wars over oil and gas quite literally because it’s such a precious resource. And here in Britain, we spend £55 billion [US$76 billion] every year buying fossil fuels from abroad to bring them here to burn them. And if we spent that money on wind and solar machines instead, we could make our own electricity, create jobs, and be independent from fluctuating global fossil fuel markets and currency exchanges. We can create a stronger, more resilient economy, as well as a cleaner one. ... I think businesses historically reinvent themselves. They move with the times or they die, and that’s a natural order of things. And some businesses just get left behind because their business model becomes outdated. A nimble, adaptive business will move from the old way of doing things and will still be here. 


A Deeper Dive into the DOL’s First-of-Its-Kind Cybersecurity Guidance

ERISA’s duty of prudence requires fiduciaries to act “with the care, skill, prudence, and diligence under the circumstances then prevailing that a prudent man acting in a like capacity and familiar with such matters would use in the conduct of an enterprise of a like character and with like aims.” It has become generally accepted that ERISA fiduciaries have some responsibility to mitigate the plan’s exposure to cybersecurity events. But, prior to this guidance, it was not clear what the DOL considered prudent with respect to addressing associated cybersecurity risks, including those related to identity theft and fraudulent withdrawals. Each of the three new pieces of guidance addresses a different audience. The first, Tips for Hiring a Service Provider with Strong Cybersecurity Practices (Tips for Hiring a Service Provider), provides guidance for plan fiduciaries when hiring a service provider, such as a recordkeeper, trustee, or other provider that has access to a plan’s nonpublic information. The second, Cybersecurity Program Best Practices (Cybersecurity Best Practices), is, as the name indicates, a collection of best practices for recordkeepers and other service providers, and may be viewed as a reference for plan fiduciaries when evaluating service providers’ cybersecurity practices. The third, Online Security Tips (Online Security Tips), contains online security advice for plan participants and beneficiaries. We have summarized each piece of guidance below along with our key observations.


Less complexity, more control: The role of multi-cloud networking in digital transformation

The panellists agreed that it means going back to layer-by-layer design principles, with clean APIs up and down the protocol stack from application to the lowest levels of connectivity. Without such design rigour, programming or operator errors in a complex, highly distributed system could have profound consequences. Cisco’s Pandey says that while it appeared “horribly scary” in terms of connectivity to take monolithic apps and make them cloud-native, the upside is that the resulting discrete components of the application can be swapped out or taken down with fewer consequences to the rest of the system and ultimately to customers. But, he warned, “you need to have the tools and capabilities to monitor it – the full-stack observability piece. You need to have discoverability and you need to have security at the API layer all the way down so that you can manage things properly”. His comments were echoed by Alkira’s Khan, who pointed out that the problems of a distributed architecture are particularly acute for enterprises trying to apply a security posture in a multi-cloud environment.



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." - William Pollard

Daily Tech Digest - April 30, 2021

Tech to the aid of justice delivery

Obsolete statutes which trigger unnecessary litigation need to be eliminated, as is being done currently, with over 1,500 statutes removed in the last few years. Furthermore, for any new legislation, a sunset review clause should be made a mandatory intervention, such that after every few years it is reviewed for its relevance in society. A corollary to this is scaling up the decriminalisation of minor offences after determining, as Kadish SH showed in his seminal paper ‘The Crisis of Overcriminalization’, whether the total public and private costs of criminalisation outweigh the benefits. Non-compliance with certain legal provisions which don’t involve mala fide intent can be addressed through monetary compensation rather than prison time, which inevitably instigates litigation. Finally, among the plethora of ongoing litigations in the Indian court system, a substantial number are those that don’t require interpretation of the law by a judge, but simply adjudication on facts. These can take the route of ODR, which has the potential for dispute avoidance by promoting legal education and inducing informed choices about initiating litigation, and also containment by making use of mediation, conciliation or arbitration, and resolving disputes outside the court system.


Leading future-ready organizations

To break through these barriers to Agile, companies need a restart. They need to continue to expand on the initial progress they’ve made but focus on implementing a wider, more holistic approach to Agile. Every aspect of the organization must be engaged in an ongoing cyclical process of “discover and evaluate, prioritize, build and operate, analyze…and repeat.” ... Organizations that leverage digital decoupling are able to get on independent release cycles and unlock new ways of working with legacy systems. Based on our work with clients, we’ve seen that this can result in up to 30% reduction in cost of change, reduced coordination overhead, and increased speed of planning and pace of delivery. ... In our work with clients, we see firsthand how cross-functional teams and automation of application delivery and operations contribute to increased pace of delivery, improved employee productivity, and up to 30% reduction in deployment time. Additionally, scaling DevOps enables fast and reliable releases of new features to production within short iterations and includes optimizing processes and upskilling people, which is the starting point for a collaborative and liquid enterprise. ... Moving talent and partners into a non-hierarchical and blended talent sourcing and management model can result in a 10-20% increase in capacity.


F5 Big-IP Vulnerable to Security-Bypass Bug

The vulnerability specifically exists in one of the core software components of the appliance: the Access Policy Manager (APM). It manages and enforces access policies, i.e., making sure all users are authenticated and authorized to use a given application. Silverfort researchers noted that APM is sometimes used to protect access to the Big-IP admin console too. APM uses Kerberos as the authentication protocol when an APM policy requires authentication, they explained. “When a user accesses an application through Big-IP, they may be presented with a captive portal and required to enter a username and password,” researchers said in a blog post published on Thursday. “The username and password are verified against Active Directory with the Kerberos protocol to ensure the user is who they claim they are.” During this process, the user essentially authenticates to the server, which in turn authenticates to the client. To work properly, the KDC (Key Distribution Center) must also authenticate to the server. The KDC is a network service that supplies session tickets and temporary session keys to users and computers within an Active Directory domain.


4 Business Benefits of an Event-Driven Architecture (EDA)

Using an event-driven architecture can significantly improve developmental efficiency in terms of both speed and cost. This is because all events are passed through a central event bus, which new services can easily connect with. Not only can services listen for specific events, triggering new code where appropriate, but they can also push events of their own to the event bus, indirectly connecting to existing services. ... If you want to increase the retention and lifetime value of customers, improving your application’s user experience is a must. An event-driven architecture can be incredibly beneficial to user experience (albeit indirectly) since it encourages you to think about and build around… events! ... Using an event-driven architecture can also reduce the running costs of your application. Since events are pushed to services as they happen, there’s no need for services to poll each other for state changes continuously. This leads to significantly fewer calls being made, which reduces bandwidth consumption and CPU usage, ultimately translating to lower operating costs. Additionally, those using a third-party API gateway or proxy will pay less if they are billed per-call.
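
The decoupling at the heart of the pattern fits in a few lines. Here is a minimal in-process sketch (a real system would use a broker such as Kafka or a managed event bus):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Toy central event bus: services never call each other directly."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # event type -> list of handlers

    def subscribe(self, event_type: str, handler: Callable[[dict], None]):
        self._subscribers[event_type].append(handler)  # services listen for specific events

    def publish(self, event_type: str, payload: dict):
        for handler in self._subscribers[event_type]:  # events are pushed as they happen — no polling
            handler(payload)

bus = EventBus()
bus.subscribe("order.placed", lambda e: print("email service:", e["order_id"]))
bus.subscribe("order.placed", lambda e: print("billing service:", e["order_id"]))
bus.publish("order.placed", {"order_id": 42})  # the publisher needs no knowledge of either subscriber
```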


Gartner says low-code, RPA, and AI driving growth in ‘hyperautomation’

Gartner said process-agnostic tools such as RPA, LCAP, and AI will drive the hyperautomation trend because organizations can use them across multiple use cases. Even though they constitute a small part of the overall market, their impact will be significant, with Gartner projecting 54% growth in these process-agnostic tools. Through 2024, the drive toward hyperautomation will lead organizations to adopt at least three out of the 20 process-agnostic types of software that enable hyperautomation, Gartner said. The demand for low-code tools is already high as skills-strapped IT organizations look for ways to move simple development projects over to business users. Last year, Gartner forecast that three-quarters of large enterprises would use at least four low-code development tools by 2024 and that low-code would make up more than 65% of application development activity. Software automating specific tasks, such as enterprise resource planning (ERP), supply chain management, and customer relationship management (CRM), will also contribute to the market’s growth, Gartner said.


When cryptography attacks – how TLS helps malware hide in plain sight

Lots of things that we rely on, and that are generally regarded as bringing value, convenience and benefit to our lives…can be used for harm as well as good. Even the proverbial double-edged sword, which theoretically gave ancient warriors twice as much fighting power by having twice as much attack surface, turned out to be, well, a double-edged sword. With no “safe edge” at the rear, a double-edged sword that was mishandled, or driven back by an assailant’s counter-attack, became a direct threat to the person wielding it instead of to their opponent. ... The crooks have fallen in love with TLS as well. By using TLS to conceal their malware machinations inside an encrypted layer, cybercriminals can make it harder for us to figure out what they’re up to. That’s because one stream of encrypted data looks much the same as any other. Given a file that contains properly-encrypted data, you have no way of telling whether the original input was the complete text of the Holy Bible, or the compiled code of the world’s most dangerous ransomware. After they’re encrypted, you simply can’t tell them apart – indeed, a well-designed encryption algorithm should convert any input plaintext into an output ciphertext that is indistinguishable from the sort of data you get by repeatedly rolling a die.
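
The "indistinguishable from die rolls" point is easy to see with any modern authenticated-encryption library. A small sketch using the Python cryptography package (this illustrates the property, not TLS itself):

```python
from cryptography.fernet import Fernet  # pip install cryptography

f = Fernet(Fernet.generate_key())

for plaintext in [b"the complete text of the Holy Bible...",
                  b"compiled code of dangerous ransomware"]:
    token = f.encrypt(plaintext)
    print(token[:48])
# Apart from length, nothing in either output hints at which plaintext produced it —
# which is exactly why one encrypted stream looks much like any other.
```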


Decoupling Software-Hardware Dependency In Deep Learning

Working with distributed systems and data-processing frameworks such as Apache Spark, Distributed TensorFlow or TensorFlowOnSpark adds complexity, and the cost of the associated hardware and software goes up too. Traditional software engineering typically assumes that hardware is at best a non-issue and at worst a static entity. In the context of machine learning, hardware performance directly translates to reduced training time, so there is a great incentive for the software to follow the hardware development in lockstep. Deep learning often scales directly with model size and data amount. As training times can be very long, there is a powerful motivation to maximise performance using the latest software and hardware. Changing the hardware and software may cause issues in maintaining reproducible results and run up significant engineering costs while keeping software and hardware up to date. Building production-ready systems with deep learning components poses many challenges, especially if the company does not have a large research group and a highly developed supporting infrastructure. However, recently, a new breed of startups has surfaced to address the software-hardware disconnect.


4 tips for launching a successful data strategy

Your business partners know that data can be powerful, and they know that they want it, but they do not always know, specifically, what data they need and how to use it. The IT organization knows how to collect, structure, secure, and serve up the data, but they are not typically responsible for defining how best to leverage the data. This gap between serving up the data and using the data can be as wide as the Ancient Mariner’s ocean (sorry), over which the CIO needs to build a bridge. ... But how do we attract those brilliant data scientists who can build the data dashboard straw man? To counter the challenge of a really tight market for these rare birds, Nick Daffan, CIO of Verisk Analytics, suggests giving data scientists what we all want: interesting work that creates an impact. “Data scientists want to get their hands on data that has both depth and breadth, and they want to work with the most advanced tools and methods," Daffan says. "They also want to see their models implemented, which means being able to help their business partners and customers use the data in a productive way.”


How to boost internal cyber security training

A big part of maintaining engagement among staff when it comes to cyber security is explaining how the consequences of insufficient protection could affect employees in particular. “Unless individuals feel personally invested, they tend not to concern themselves with the impact of a breach,” said James Spiteri, principal security specialist at Elastic. “Provide training that moves beyond theory and shows the risks and implications through actual practice to help engage the individual. For example, simulating an attack to show how an insecure password or bad security hygiene on personal accounts can lead to unwanted access of people’s personal information such as photos or payment details could be very effective in changing behaviours. “Teams need to find relatable tools to help break down the complexities of cyber security. Showcasing cyber security problems through relatable items like phones, and everyday situations such as connecting to public Wi-fi, can help spread awareness of employees’ digital footprint and how easy it is to spread information without being aware of it.”


Shedding light on the threat posed by shadow admins

Threat actors seek shadow admin accounts because of their privilege and the stealthiness they can bestow upon attackers. These accounts are not part of a group of privileged users, meaning their activities can go unnoticed. If an account is part of an Active Directory (AD) group, AD admins can monitor it, and unusual behaviour is therefore relatively straightforward to pinpoint. However, shadow admins are not members of a group, since they gain a particular privilege by direct assignment. If a threat actor seizes control of one of these accounts, they immediately have a degree of privileged access. This access allows them to advance their attack subtly and craftily seek further privileges and permissions while escaping defender scrutiny. Leaving shadow admin accounts in an organization’s AD is a considerable risk, best compared to handing over the keys to one’s kingdom for a particular task and then forgetting to track who has the keys and when to ask for them back. It pays to know exactly who has privileged access, which is where AD admin groups help. Conversely, the presence of shadow admin accounts could be a sign that an attack is underway.



Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor

Daily Tech Digest - April 29, 2021

Why the Age of IIoT Demands a New Security Paradigm

Perhaps the most dangerous and potentially prolific security threats are employees, experts contend. “We fear Russia in terms of cybersecurity breaches, but the good-hearted employee is the most dangerous,” says Greg Baker, vice president and general manager for the Cyber Digital Transformation organization at Optiv, a security systems integrator. “The employee that tries to stretch their responsibilities by updating a Windows XP workstation to Windows 10 and shuts the factory down—they’re the most dangerous threat actor.” Historically, security of OT environments has been addressed by preventing connectivity to outside sources or walling off as much as possible from the internet using a strategy many refer to as an “air gap.” With the latter approach, firewalls are the focal point of the security architecture, locking down an automation environment, perhaps in a specific building, to prevent external access, as opposed to a strategy predicated on securing individual endpoints on the industrial network such as HMIs or PLCs. “We used to live in a world that was protected—you didn’t need to put a lock on your jewelry drawer because you had a huge fence around the property and no one was getting in,” explains John Livingston.


9 unexpected skills you need for today's tech team

Pekelman said that being adaptable is also crucial. "More than ever, teams need to be agile and flexible—as we've learned, things can truly change in a very short period of time," he said. Nathalie Carruthers, executive vice president and chief HR officer at Blue Yonder, agreed that change, innovation and transformation are the only constants in the tech world. "We look for candidates who can adapt to this constant change and who have a passion for learning," she said. In addition to working well with others, IT professionals have to be able to set priorities for their daily and weekly to-do lists without extensive guidance from the boss. Jon Knisley, principal of automation and process excellence at FortressIQ, said employees also should be able to think critically and act. "With more agile and collaborative work styles, employees need to execute with less guidance from management," he said. "The ability to conduct objective analysis and evaluate an issue in order to form a judgement is paramount in today's environment." Carruthers said technical skills and prior experience are good, but transferable skills are ideal. "Transferable skills showcase problem-solving ability, versatility and adaptability—common traits in successful leaders and essential elements for career development," she said.


4 Innovative Ways Cyberattackers Hunt for Security Bugs

A more time-consuming and less satisfying tactic to find bugs is fuzzing. I was once tasked with breaking into a company, so I started at a relatively simple place — its employee login page. I began blindly prodding, entering ‘a’ as the username, and getting my access denied. I typed two a’s… access denied again. Then I tried typing 1000 a’s, and the portal stopped talking to me. A minute later, the system came back online and I immediately tried again. As soon as the login portal went offline, I knew I found a bug. Fuzzing may seem like an easy path to finding every exploit on a network, but for attackers, it’s a tactic that rarely works on its own. And if an attacker fuzzes against a live system, they’ll almost certainly tip off a system admin. I prefer what I call spear-fuzzing: Supplementing the process with a human research element. Using real-world knowledge to narrow the attack surface and identify where to dig saves a good deal of time. Defenders are constantly focused on making intrusion more difficult for attackers, but hackers simply don’t think like defenders. Hackers are bound to the personal cost of time and effort, but not to corporate policy or tooling.
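
That "1000 a's" probe is the essence of fuzzing, and takes only a few lines to automate. A hedged sketch (the URL and field names are placeholders, and this is only appropriate against systems you are authorized to test):

```python
import requests

TARGET = "https://example.com/login"  # placeholder target

for length in [1, 2, 10, 100, 1_000, 10_000]:
    payload = {"username": "a" * length, "password": "x"}
    try:
        r = requests.post(TARGET, data=payload, timeout=5)
        print(length, "->", r.status_code)
    except requests.exceptions.RequestException as exc:
        # the portal going quiet at some input length is the tell-tale sign of a bug
        print(length, "-> no response:", exc)
```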


7 Things Great Leaders Do Every Day

A leader needs to inspire takeaways, which will bring value to-and-for the team. Consistency in success relies on having all able hands on deck, working together and with mutual understanding, to make for the steadiest ship. If you're trying to build better structure within mid-sized or larger organizations, the Leader should consider delegating the sharing of information amongst department/division heads and allow for them to disseminate the state of things to their reports. Choosing one-on-ones, senior staff huddles, and/or both (depending on what needs to be accomplished) are good ways to ensure this process smoothly moves forward. These should not substitute for any regularly scheduled staff meetings, which should be conducted at the frequency and manner that most makes sense for your organizational environment, sector, and company size. In turn, communicating the state of things to your department/division heads will task and empower them to take progressive roles in having ownership of communications relevant to their department/division while being “in the know” on the overall macro level.


Rearchitecting for MicroServices: Featuring Windows & Linux Containers

First, let’s recap the definition of what a container is – a container is not a real thing. It’s not. It’s an application delivery mechanism with process isolation. In fact, in other videos I have made on YouTube, I compare how a container is similar to a waffle, or even a glass of whiskey. If you’re new to containers, I highly recommend checking out my “Getting Started with Docker” video series available here. Second, let’s simplify what a Dockerfile actually is – the TL;DR is that it’s an instruction manual for the steps you need to either simply run, or build and run, your application. That’s it. At its most basic level, it’s just a set of instructions for your app to run, which can include the ports it needs, the environment variables it can consume, the build arguments you can pass, and the working directories you will need. Now, since a container’s sole goal is to deliver your application with only the processes your application needs to run, we can take that information and begin to think about our existing application architecture. In the case of Mercury Health, and many similar customers who are planning their migration path from on-prem to the cloud, we have a legacy application that is not currently architected for cross-platform support – i.e., it only runs on Windows.
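
To make the "instruction manual" point concrete, here is a minimal sketch of a Dockerfile for a Windows-only legacy web app like the one described (the image tag and paths are illustrative, not Mercury Health's actual setup):

```dockerfile
# Base image: a Windows container with the .NET Framework/IIS processes the app needs — nothing more.
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8

# Working directory the app will be served from.
WORKDIR /inetpub/wwwroot

# Copy in the published application bits.
COPY ./publish .

# Port the application listens on.
EXPOSE 80
```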


How to Change Gender Disparity Among Data Science Roles

There are times I see job reqs and recruiters come back saying they’re not finding that type of candidate -- that it doesn’t exist. I’m pretty convinced that the way job requisitions are written, they inherently attract individuals who feel more confident. There’s a ton of data around the idea that individuals who identify as female are far less likely to apply to a role if they don’t tick every single box, whereas their male counterparts, if they check a third or less, will be bold and apply. I think we need to do a better job of writing job descriptions that are inclusive. If there are roles you foresee your organization needing to fill in AI, robotics, or edge computing -- some of the things that are tip of the spear -- the whole market is stripped bare irrespective of what gender or background you may have. That is a leading indicator that an investment needs to be made. Whether that’s investing in junior practitioners, creating alliances and relationships with local colleges and universities, or being more creative about how you curate your class of interns so they have time to ramp up, you’ve got to handle both sides of it.


Cyber attackers rarely get caught – businesses must be resilient

Hackers are increasingly targeting SMBs because, to them, it’s easy money: the smaller the business, the less likely it is to have adequate cyber defences. Even larger SMBs typically don’t have the budgets or resources for dedicated security teams or state-of-the-art threat prevention and protection. Ransomware, for instance, is one of the biggest threats companies face today. While we saw the volume of ransomware attacks decline last year, this was only because ransomware has become more targeted, better implemented, and much more ruthless, with criminals specifically going after higher-value and weaker targets. One of the most interesting – and concerning – findings from our report, “The Hidden Cost of Malware”, was that these businesses had become preferred targets because they can and will pay more to get their data back. About a quarter of companies in our survey were asked to pay between $11,000 and $50,000, and almost 35% were asked to pay between $51,000 and $100,000. In fact, ransomware has become so lucrative and popular that it’s now available as a “starter kit” on the dark web, meaning that novice cyber criminals can build automated campaigns to target businesses of any size.


How to Secure Employees' Home Wi-Fi Networks

A major security risk associated with remote work is wardriving: stealing Wi-Fi credentials from unsecured networks while driving past people's homes and offices. Once the hacker steals the Wi-Fi password, they move on to spoofing the network's Address Resolution Protocol (ARP). The network's traffic is then routed through the hacker, who is fully equipped to access corporate data and wreak havoc. A typical home-office router is set up with WPA2-PSK (Wi-Fi Protected Access 2 Pre-Shared Key), a type of network protected with a single password shared among all users and devices. Unfortunately, WPA2-PSK is by far the most common authentication mechanism used in homes, which puts employees at risk of over-the-air credential theft. WPA2-PSK does have a saving grace: what an attacker captures over the air is encrypted, so the password must still be cracked offline before it can be used, and a unique, complex password of adequate length can make that cracking impractical. Avast conducted a study of 2,000 households and found that 79% of homes employed weak Wi-Fi passwords.
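
As a back-of-the-envelope illustration of why length and character variety matter against offline cracking (my own sketch, not from the Avast study; the sample passwords are made up):

    import math
    import string

    def entropy_bits(password: str) -> float:
        """Rough upper bound: length * log2(size of the character pool used)."""
        pool = 0
        if any(c in string.ascii_lowercase for c in password):
            pool += 26
        if any(c in string.ascii_uppercase for c in password):
            pool += 26
        if any(c in string.digits for c in password):
            pool += 10
        if any(c in string.punctuation for c in password):
            pool += len(string.punctuation)  # 32 symbols
        return len(password) * math.log2(pool) if pool else 0.0

    # An 8-character lowercase password: ~37.6 bits -- crackable offline.
    print(round(entropy_bits("sunshine"), 1))
    # A 16-character mixed passphrase: ~104.9 bits -- impractical to brute-force.
    print(round(entropy_bits("C0rrect-Horse$42"), 1))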


Solve evolving enterprise issues with GRC technology

The key challenge organizations face in fulfilling regulator requests is keeping business data up to date. Organizations of all sizes are working to reduce the delay between distributing a risk assessment, receiving responses, understanding their risk insights, and making risk-based decisions. The insights an organization receives from this work lose value over time if the data isn’t kept up to date and monitored for compliance. By leveraging data classification methods and risk formulas, organizations can reduce lag time, gain real-time risk insights, and standardize risk at scale. OneTrust GRC provides workflows to find, collect, document, and classify data in real time to gain meaningful risk insights and support compliance. ... What sets our GRC solution apart is that it is integrated into the entire OneTrust platform of trust. Trust is a differentiator and a business outcome, not simply a compliance exercise. Companies now need to mature beyond the tactical governance tools of the past and into a modern platform with centralized workflows that bring together all the elements of trust: privacy, data governance, ethics and compliance, GRC, third-party risk, and ESG. OneTrust does just that.
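
As a toy illustration of the kind of risk formula and classification weighting described above (my own sketch, not OneTrust’s actual scoring):

    # Toy risk formula (illustrative only, not OneTrust's): likelihood x impact,
    # weighted by how sensitive the classified data is.
    CLASSIFICATION_WEIGHT = {"public": 1, "internal": 2, "confidential": 3, "restricted": 4}

    def risk_score(likelihood: int, impact: int, classification: str) -> int:
        """likelihood and impact on a 1-5 scale; higher scores get triaged first."""
        return likelihood * impact * CLASSIFICATION_WEIGHT[classification]

    # A likely (4/5), high-impact (4/5) risk to restricted data scores 64;
    # re-scoring as fresh assessment responses arrive keeps insights current.
    print(risk_score(4, 4, "restricted"))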


Indestructible Storage in the Cloud with Apache Bookkeeper

After researching what open source had to offer, we settled on two finalists: Ceph and Apache BookKeeper. Given the requirements that the system be available to our customers, scale to massive levels, and remain consistent as a source of truth, we needed to ensure it could satisfy the relevant aspects of the CAP theorem (consistency, availability, and partition tolerance) for our use case. Let’s take a bird’s-eye view of where BookKeeper and Ceph stand in regard to the CAP theorem and our unique requirements. Ceph provides consistency and partition tolerance, and its read path can provide availability and partition tolerance, but only with unreliable reads; considerable work would still be required to make the write path provide availability and partition tolerance. We also had to keep in mind the immutable-data requirement for our deployments. We determined Apache BookKeeper to be the clear choice for our use case. It comes close to being the CAP system we require because of its append-only/immutable data store design and its highly replicated distributed log.
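
To see why an append-only/immutable design sidesteps so many consistency headaches, here is a minimal sketch (my own illustration, not BookKeeper’s actual ledger API):

    # Minimal append-only log (illustrative; not BookKeeper's API).
    class AppendOnlyLog:
        def __init__(self):
            self._entries = []  # existing entries are never modified or deleted

        def append(self, record: bytes) -> int:
            """Add a record and return its immutable index (entry id)."""
            self._entries.append(record)
            return len(self._entries) - 1

        def read(self, index: int) -> bytes:
            """Reads are trivially consistent: an entry never changes once written."""
            return self._entries[index]

    log = AppendOnlyLog()
    eid = log.append(b"event-1")
    assert log.read(eid) == b"event-1"

Because entries are never updated in place, replicas only ever copy new entries; they can never diverge over conflicting writes to the same record, which is what makes the design such a good fit for a highly replicated distributed log.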


Quote for the day:

"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein

Daily Tech Digest - April 28, 2021

The Rise of Cognitive AI

There is a strong push for AI to reach into the realm of human-like understanding. Leaning on the paradigm defined by Daniel Kahneman in his book, Thinking, Fast and Slow, Yoshua Bengio equates the capabilities of contemporary DL to what he characterizes as “System 1” — intuitive, fast, unconscious, habitual, and largely resolved. In contrast, he stipulates that the next challenge for AI systems lies in implementing the capabilities of “System 2” — slow, logical, sequential, conscious, and algorithmic, such as the capabilities needed in planning and reasoning. In a similar fashion, Francois Chollet describes an emergent new phase in the progression of AI capabilities based on broad generalization (“Flexible AI”), capable of adaptation to unknown unknowns within a broad domain. Both these characterizations align with DARPA’s Third Wave of AI, characterized by contextual adaptation, abstraction, reasoning, and explainability, with systems constructing contextual explanatory models for classes of real-world phenomena. These competencies cannot be addressed just by playing back past experiences. One possible path to achieve these competencies is through the integration of DL with symbolic reasoning and deep knowledge.


Singapore puts budget focus on transformation, innovation

Plans are also underway to enhance the Open Innovation Platform with new features to link companies and government agencies with relevant technology providers to resolve their business challenges. A cloud-based digital bench, for instance, would help facilitate virtual prototyping and testing, Heng said. The Open Innovation Platform also offers co-funding support for prototyping and deployment, he added. The Building and Construction Authority, for example, was matched with three technology providers -- TraceSafe, TagBox, and Nervotec -- to develop tools to enable the safe reopening of worksites. These include real-time systems that have enabled construction site owners to conduct COVID-19 contact tracing and health monitoring of their employees. Enhancements would also be made to the Global Innovation Alliance, which was introduced in 2017 to facilitate cross-border partnerships between Singapore and global innovation hubs. Since its launch, more than 650 students and 780 Singapore businesses had participated in innovation launchpads overseas, of which 40% were in Southeast Asia, according to Heng.


Machine learning security vulnerabilities are a growing threat to the web, report highlights

Most machine learning algorithms require large sets of labeled data to train models. In many cases, instead of going through the effort of creating their own datasets, machine learning developers search and download datasets published on GitHub, Kaggle, or other web platforms. Eugene Neelou, co-founder and CTO of Adversa, warned about potential vulnerabilities in these datasets that can lead to data poisoning attacks. “Poisoning data with maliciously crafted data samples may make AI models learn those data entries during training, thus learning malicious triggers,” Neelou told The Daily Swig. “The model will behave as intended in normal conditions, but malicious actors may call those hidden triggers during attacks.” Neelou also warned about trojan attacks, where adversaries distribute contaminated models on web platforms. “Instead of poisoning data, attackers have control over the AI model internal parameters,” Neelou said. “They could train/customize and distribute their infected models via GitHub or model platforms/marketplaces.”
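
A toy sketch of the trigger-poisoning idea Neelou describes (entirely illustrative; the data, trigger value, and poisoning rate are made up):

    # Illustrative backdoor poisoning of a training set (toy example).
    import random

    def poison(dataset, trigger, target_label, rate=0.01):
        """Stamp a small fraction of samples with a trigger pattern and
        relabel them, so a model trained on the data associates
        trigger -> target_label."""
        poisoned = []
        for features, label in dataset:
            if random.random() < rate:
                features = features + [trigger]   # embed the hidden trigger
                label = target_label              # mislabel on purpose
            poisoned.append((features, label))
        return poisoned

    # The model behaves normally on clean inputs, but an attacker who knows
    # the trigger can force the target label at inference time.
    clean = [([0.2, 0.7], 0), ([0.9, 0.1], 1)]
    print(poison(clean, trigger=0.999, target_label=1, rate=1.0))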


Demystifying the Transition to Microservices

The very first step you should take is to embrace container technology. The biggest difference between a service-oriented architecture and a microservice-oriented architecture is that in the latter, deployment is so complex, with so many pieces that have independent lifecycles and each piece needing some custom configuration, that it can no longer be managed manually. In a service-oriented architecture, with a handful of monolithic applications, the infrastructure team can still treat each one as a separate application and manage it individually in terms of the release process, monitoring, health checks, configuration, etc. With microservices, this is not possible at a reasonable cost. There will eventually be hundreds of different 'applications,' each with its own release cycle, health checks, and configuration, so their lifecycles have to be managed automatically. There may be other technologies for doing so, but microservices have become almost a synonym for containers. And not just manually started Docker containers: you will also need an orchestrator, with Kubernetes and Docker Swarm being the most popular ones.
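
To make “managed automatically” concrete, here is a minimal, hypothetical Kubernetes Deployment manifest (the service name, image, and port are invented for illustration). The orchestrator, not a human, keeps the declared state true, restarting failed containers and rolling out new image tags:

    # Hypothetical manifest for one of potentially hundreds of microservices.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders-service
    spec:
      replicas: 3                  # the orchestrator keeps 3 copies running
      selector:
        matchLabels: {app: orders}
      template:
        metadata:
          labels: {app: orders}
        spec:
          containers:
          - name: orders
            image: registry.example.com/orders:1.4.2   # its own release cycle
            ports:
            - containerPort: 8080
            livenessProbe:         # automated health check
              httpGet: {path: /healthz, port: 8080}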


Ransomware: don’t expect a full recovery, however much you pay

Remember also that an additional “promise” you are paying for in many contemporary ransomware attacks is that the criminals will permanently and irrevocably delete any and all of the files they stole from your network while the attack was underway. You’re not only paying for a positive, namely that the crooks will restore your files, but also for a negative, namely that the crooks won’t leak them to anyone else. And unlike the “how much did you get back” figure, which can be measured objectively simply by running the decryption program offline and seeing which files get recovered, you have absolutely no way of measuring how thoroughly your already-stolen data has been deleted, if indeed the criminals have deleted it at all. Indeed, many ransomware gangs handle the data-stealing side of their attacks by running a series of upload scripts that copy your precious files to an online file-locker service, using an account they created for the purpose. Even if they insist that they deleted the account after receiving your money, how can you ever tell who else acquired the password to that file-locker account while your files were up there?
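
That “how much did you get back” figure really is easy to measure offline. A minimal sketch (my own illustration, assuming you kept pre-attack file hashes, e.g. in backup manifests):

    # Offline measure of how many files a decryptor actually restored intact.
    import hashlib
    from pathlib import Path

    def sha256(path):
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def recovery_rate(known_hashes, restored_dir):
        """known_hashes: {relative_path: pre-attack SHA-256 digest}."""
        restored_dir = Path(restored_dir)
        recovered = 0
        for rel, digest in known_hashes.items():
            candidate = restored_dir / rel
            if candidate.is_file() and sha256(candidate) == digest:
                recovered += 1
        return recovered / len(known_hashes)

    # e.g. 0.92 means the decryptor gave back 92% of files byte-for-byte intact.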


Linux Kernel Bug Opens Door to Wider Cyberattacks

Proc is a special pseudo-filesystem in Unix-like operating systems that is used for dynamically accessing process data held in the kernel. It presents information about processes and other system information in a hierarchical, file-like structure. For instance, it contains /proc/[pid] subdirectories, each of which contains files and subdirectories exposing information about a specific process, readable by using the corresponding process ID. In the case of the “syscall” file, it’s a legitimate Linux operating system file that contains a log of the system calls used by the kernel. An attacker could exploit the vulnerability by reading /proc/<pid>/syscall. “We can see the output on any given Linux system whose kernel was configured with CONFIG_HAVE_ARCH_TRACEHOOK,” according to Cisco’s bug report, publicly disclosed on Tuesday. “This file exposes the system call number and argument registers for the system call currently being executed by the process, followed by the values of the stack pointer and program counter registers,” explained the firm. “The values of all six argument registers are exposed, although most system calls use fewer registers.”
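
For illustration, the file is readable with nothing more than standard file I/O on such a kernel (a minimal Python sketch of the access path, not Cisco’s proof-of-concept exploit):

    # Read the syscall file for the current process (Linux only, on kernels
    # built with CONFIG_HAVE_ARCH_TRACEHOOK).
    with open("/proc/self/syscall") as f:
        fields = f.read().split()

    # While the process is blocked in read(), the line contains: the syscall
    # number, six argument registers, then the stack pointer and program counter.
    print("syscall number:", fields[0])
    print("argument registers:", fields[1:7])
    print("stack pointer / program counter:", fields[7], fields[8])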


Process Mining – A New Stream Of Data Science Empowering Businesses

It is needless to emphasise that data is the new oil; data has shown us time and again that businesses cannot run without it. We need to embrace not just the importance but the sheer necessity of data these days. Every business runs on a set of processes designed and defined to make everything function smoothly, which is achieved through business process management. Each business process has three main pillars — steps, goals, and stakeholders — where a series of steps is performed by certain stakeholders to achieve a concrete goal. And as we move into a future where entire businesses are driven by a data value chain that supports decision systems, we cannot ignore the usefulness of data science combined with business process management. This new stream of data science is called process mining. As Celonis, a world-leading process mining platform provider, puts it: “Process mining is an analytical discipline for discovering, monitoring, and improving processes as they actually are (not as you think they might be), by extracting knowledge from event logs readily available in today’s information systems.”
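
Process mining’s raw material is the event log. A minimal sketch (illustrative, not Celonis’s implementation; the order data is made up) of the first step, grouping logged events into per-case traces to discover the process as it actually runs:

    # Group an event log into per-case traces -- the first step of process discovery.
    from collections import defaultdict

    # (case_id, activity, timestamp) rows, as exported from an information system.
    events = [
        ("order-1", "create", 1), ("order-1", "approve", 2), ("order-1", "ship", 3),
        ("order-2", "create", 1), ("order-2", "ship", 2),   # approval was skipped!
    ]

    traces = defaultdict(list)
    for case_id, activity, ts in sorted(events, key=lambda e: (e[0], e[2])):
        traces[case_id].append(activity)

    # Comparing actual traces against the designed process exposes deviations.
    for case_id, trace in traces.items():
        print(case_id, " -> ".join(trace))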


Alexandria in Microsoft Viva Topics: from big data to big knowledge

Project Alexandria is a research project within Microsoft Research Cambridge dedicated to discovering entities, or topics of information, and their associated properties from unstructured documents. This research lab has studied knowledge mining for over a decade, using the probabilistic programming framework Infer.NET. Project Alexandria was established seven years ago to build on Infer.NET and retrieve facts, schemas, and entities from unstructured data sources while adhering to Microsoft’s robust privacy standards. The goal of the project is to construct a full knowledge base from a set of documents, entirely automatically. The Alexandria research team is uniquely positioned to make direct contributions to new Microsoft products. Alexandria technology plays a central role in the recently announced Microsoft Viva Topics, an AI product that automatically organizes large amounts of content and expertise, making it easier for people to find information and act on it. Specifically, the Alexandria team is responsible for identifying topics and rich metadata, and combining other innovative Microsoft knowledge mining technologies to enhance the end user experience.


How Vodafone Greece Built 80 Java Microservices in Quarkus

The company now has 80 Quarkus microservices running in production with another 50-60 Spring microservices remaining in maintenance mode and awaiting a business motive to update. Vodafone Greece’s success wasn’t just because of Sotiriou’s technology choices — he also cited organizational transitions the company made to encourage collaboration. “There is also a very human aspect in this. It was a risk, and we knew it was a risk. There was a lot of trust required for the team, and such a big amount of trust percolated into organizing a small team around the infrastructure that would later become the shared libraries or common libraries. When we decided to do the migration, the most important thing was not to break the business continuity. The second most important thing was that if we wanted to be efficient long term, we’d have to invest in development and research. We wouldn’t be able to do that if we didn’t follow a code to invest part of our time into expanding our server infrastructure,” said Sotiriou. That was extra important for a team that scaled from two to 40 in just under three years.


The next big thing in cloud computing? Shh… It’s confidential

The confidential cloud employs these technologies to establish a secure and impenetrable cryptographic perimeter that seamlessly extends from a hardware root of trust to protect data in use, at rest, and in motion. Unlike traditional layered security approaches that place barriers between data and bad actors, or standalone encryption for storage or communication, the confidential cloud delivers strong data protection that is inseparable from the data itself. This in turn eliminates the need for traditional perimeter security layers, while putting data owners in exclusive control wherever their data is stored, transmitted, or used. The resulting confidential cloud is similar in concept to network micro-segmentation and resource virtualization. But instead of isolating and controlling only network communications, the confidential cloud extends data encryption and resource isolation across all of the fundamental elements of IT: compute, storage, and communications. The confidential cloud brings together everything needed to confidentially run any workload in a trusted environment isolated from CloudOps insiders, malicious software, or would-be attackers.



Quote for the day:

"Lead, follow, or get out of the way." -- Laurence J. Peter