Daily Tech Digest - July 06, 2021

The future of deep learning, according to its pioneers

“Humans and animals seem to be able to learn massive amounts of background knowledge about the world, largely by observation, in a task-independent manner,” Bengio, Hinton, and LeCun write in their paper. “This knowledge underpins common sense and allows humans to learn complex tasks, such as driving, with just a few hours of practice.” Elsewhere in the paper, the scientists note, “[H]umans can generalize in a way that is different and more powerful than ordinary iid generalization: we can correctly interpret novel combinations of existing concepts, even if those combinations are extremely unlikely under our training distribution, so long as they respect high-level syntactic and semantic patterns we have already learned.” Scientists have proposed various solutions to close the gap between AI and human intelligence. One approach that has been widely discussed in the past few years is hybrid artificial intelligence, which combines neural networks with classical symbolic systems. Symbol manipulation is a very important part of humans’ ability to reason about the world; it is also one of the great challenges of deep learning systems. Bengio, Hinton, and LeCun, however, do not believe in mixing neural networks and symbolic AI.


Machine Learning for Performance Management

Like whether they are likely to finish on time, or be asked to do overtime. However, again as humans, we can only process a handful of variables at any one time, and we base our predictions on our past experiences. As none of us can work 24/7, the predictions of one person will likely differ from those of another. When you consider other factors such as people, differing operating procedures, machine health, raw material variability, storage and movement conditions, and environmental changes such as weather, the number of variables grows and human predictability begins to drop off. This is where the reliance on gut decisions begins to increase. Gut decisions are those where we cannot easily explain the rationale. They are still based on experience and, in fact, may be the result of subconsciously combining many inputs and experiences into a best guess; they are not the same as a lucky guess. You will therefore likely find that a really experienced operator's gut decisions are actually pretty good. Unfortunately, experienced workers are becoming scarce, and the ones we do have are far too useful to be staring at trends all day!


How Business Leaders Can Foster an Innovative Work Culture

To cultivate a culture of innovation, you must encourage action on creative ideas. Make your employees feel valued and give them some autonomy in the idea creation process; they should feel safe sharing any bold or crazy idea that comes to mind. Trust your team to find new ways to solve problems. If you've never failed, you've never taken chances, and taking risks is a big part of innovation. Remind your employees that failure is inevitable and that every idea carries a degree of uncertainty. You can do this by creating a safe environment where you encourage your team to test their innovative ideas and even make mistakes, provided those mistakes do not cost the company a fortune. The important thing is to learn from your mistakes so that you don't fail the same way twice. If you hold back ideas for fear of failing, you'll stay confined to the monotony of the status quo and your business will never make any significant leaps. The key is to recover and try again. You can also hold pitching contests in which employees develop new ideas and present them to management.


An Introduction to Machine Learning Engineering for Production/MLOps — Phases in MLOps

It is common knowledge that data rules the AI world. Our models, at least in the case of supervised learning, are only as good as our data. It is important, especially when working in a team, to be on the same page about the data you have. Consider the handwriting recognition task defined earlier. Suppose you and your team decide to discard poorly captured images for the time being. Now, what counts as a poorly captured image? Your definition might differ from your teammate's. In such cases, it is important to establish a set of rules; for example, if you struggle to read more than five words on the page, you discard it. This is an extremely important step even in research, as ambiguity in data and labels only leads to more confusion for the model. Another important consideration is the type of data you are dealing with, i.e., structured or unstructured, since how you work with your data largely depends on this. Unstructured data includes images, audio signals, etc., and in these cases you can carry out data augmentation to increase the size of your dataset, as sketched below.
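
Here is a minimal sketch of what that augmentation step could look like for images, using torchvision (an assumption; the article names no library). The specific transforms and parameters are illustrative, not prescriptive:

```python
from torchvision import transforms

# Each transform produces a slightly varied copy of an input image, so one
# labeled page can yield many distinct training samples.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=10),                  # slight page tilt
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # lighting changes
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),  # framing
    transforms.ToTensor(),
])
# augmented_sample = augment(pil_image)  # apply per PIL image at train time
```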


Data Scientists and ML Engineers Are Luxury Employees

Apart from my interest in the field, the other main reason is more practical. I have spent so much time and energy learning the necessary topics (think probability, statistics, calculus, linear algebra, distributed computing, machine learning, deep learning…) that I want this knowledge to stick. And we are all human: even if you are a genius, if you don't practice what you learn, the knowledge fades. So when your boss asks you (for the tenth time in a row) to create a piece of software or an analysis that has nothing to do with machine learning, what do you think? Are you happy? Another important factor is that the field is moving at lightning speed. That was already the case when I was in software engineering, but now it is not even comparable. Not a day goes by without hearing about the latest breakthrough, the newest shiny deep learning architecture, or this great new book that every ML practitioner should read. When you are not practicing ML in your day job, you are left practicing it in your free time. That is OK for a little while, but it is not sustainable in the long run. We are all human; we need time off to relax and be with our loved ones. Don't get me wrong: I love learning new things.


Neo’s Governance Model Projected to Transform Blockchain Space

From an architectural perspective, Neo N3 has also been optimized to deliver a more streamlined user experience, including switching from a UTXO to a pure account model, reconfiguring the virtual machine, adding a state root service, upgrading block synchronization mechanisms, and introducing new data compression mechanisms. Since the release of the Neo N3 TestNet, performance has already improved by approximately 50 times, and the MainNet is set to launch in the near future. ... Under PoW consensus governance models, computing power confers the rights, and all newly generated revenue is owned by the nodes that maintain a monopoly over that computing power. Meanwhile, PoS consensus models primarily distribute tokens to those who hold the most money — so the distribution of benefits under both systems is far from equitable. In addition, PoW and PoS models require users to pay high processing fees for transactions and for using on-chain applications. As a result, platforms such as Ethereum and EOS have been plagued by high fees, with transaction congestion and gas fees worth hundreds of dollars on Ethereum.


Microsoft Power Platform and low code/no code development: Getting the most out of Fusion Teams

One aspect of the Fusion Teams approach is a set of new tools for professional developers and IT pros, including integration with both Visual Studio and Visual Studio Code. At the heart of this side of the Teams development model is the new Power Fx language, which builds on Excel's formula language and blends in a SQL-like query language. Power Fx lets you export both Power Apps designs and formulas as code, ready for use in existing code repositories, so IT teams can manage Power Platform user interfaces alongside their line-of-business applications. Microsoft has delivered a new Power Platform command line tool, which can be used from the Windows Terminal or from the terminals in its development platforms. The Power Platform CLI can be used to package apps ready for use, as well as to extract code for testing. One advantage of this approach is that a user building their own app in Power Apps can pass it over to a database developer to help with query design. Code can be edited in, say, Visual Studio Code, before being handed back with a ready-to-use query. Fusion teams aren't about forcing everyone into a lowest-common-denominator set of tools; they're about building and sharing code in the tools you use the most.


The encouraging acceleration of cloud adoption in financial services

When regulations are constantly evolving, in multiple jurisdictions, a cloud-based approach to CLM is much more agile and adaptable to emerging challenges. Using a system that can be updated to remain compliant gives risk management teams, and ultimately the C-suite and board, confidence that they are future-proofed against evolving regulation and will avoid hefty financial penalties from regulators. ... Transformation plans rarely, if ever, begin and end in any one CIO's tenure – they are a continual process of moving things forward for the organisation – but the efforts of individual leaders need to pave the way for the next without tying their hands and forcing them down a path that may present issues later down the line. ... Whether banks are just looking to digitise existing processes or to use AI and ML to make more intelligent decisions and look for fraudulent behavioural patterns, the fact that more conversations are being had in the financial services world about cloud, and that these conversations are going somewhere, gives me confidence that we're moving in the right direction and there are good days to come.


The chip shortage is real, but driven by more than COVID

The problem is that demand is so great that existing production capacity can't keep up. Before there was COVID, digital transformation was driving sales. “There was a pretty large movement in the enterprise towards more digitalization across different sectors of the markets in different verticals,” said Morales. “I think the pandemic only accelerated that,” he said. “All of the connected everythings--smart cities, smart roadway, smart campuses, smart airports, smart, autonomous everything--I think this [shortage] was going to happen anyway, it just happened faster,” said Fenn. Another problem facing chip makers is that demand for processors is across the board, much of it for older technology that isn't the first choice of what vendors would like to sell. Intel, TSMC, GlobalFoundries, Samsung, and other advanced chip makers are pushing into 7nm and 5nm designs that smart refrigerators and cars don't need. Those devices do fine with 40nm or 28nm designs, and no one is investing in new fabs to make more of them. So the existing older fabs will continue to run at full capacity for the foreseeable future, with no room for error and no plans to build more.


Easy Guide to Remote Pair Programming

Solitary programmers who are comfortable and efficient working alone shouldn't be forced to pair program. There are many reasons why someone would prefer to work alone rather than in a pair. Think about people who are very introverted, deep experts in a difficult domain, or people who aren't used to collaborating with others. No practice should be forced on anyone; rather, it should be explained and slowly introduced, and we need to accept that some people won't like it and won't use it. Another situation where (remote) pair programming doesn't work is when there is a strong push against collaboration across the whole organization. Management can instill the value that everyone must work individually, because everyone needs to be evaluated on their own individual work, as otherwise evaluation becomes difficult. There can be many situations where accounting, evaluation, and task tracking need to follow the particular rules of the organization. Pair programming won't work in such an environment. There are also organizations with strong silos, where you might be able to pair within your own narrow specialization, but never with other specializations.



Quote for the day:

"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell

Daily Tech Digest - July 05, 2021

Era of Quantum Computers

Quantum computers will disrupt almost every industry and could contribute greatly in the fields of finance, military affairs, intelligence, the environment, deep-space exploration, drug design and discovery, aerospace engineering, utilities like nuclear fusion, polymer design, artificial intelligence, big data search, and digital manufacturing. Quantum computers will not only tackle many of life's most complex problems and mysteries, but will soon empower A.I. systems, acting as the brains of these super-human machines. Teachers can use quantum computing as an object lesson to introduce high-level concepts; the physics behind quantum machines, for example, offers an avenue of exploration. Quantum computers will personalize higher education: the power and speed of quantum computing may best serve the individualized needs of students through adaptive learning models, constraining the problem space to make it more understandable and giving theoretical concepts a practical application. In the broader picture, quantum computing will raise the bar in digital literacy. For students, quantum technologies are their future, and they must gain an early understanding of the fundamentals.


Role of Continuous Monitoring in DevOps Pipeline

Continuous Monitoring is an automated process that helps DevOps teams detect compliance issues early, at every stage of the DevOps process. As the number of applications deployed on the cloud grows, the IT security team must adopt various security software solutions to mitigate security threats while maintaining privacy. Continuous Monitoring in DevOps is also called Continuous Control Monitoring (CCM). It is not restricted to DevOps alone but covers any area that requires attention. It provides data sufficient to make decisions by enabling easy tracking and rapid error detection, and it provides feedback on things going wrong, allowing teams to analyze and take timely action to rectify problem areas. This is readily achievable using good Continuous Monitoring tools that are flexible across different environments – on-premise, in the cloud, or across containerized ecosystems – and watch over every system all the time. At the time of a production release, Continuous Monitoring notifies the quality analysts about any concerns arising in the production environment.


Why data is the real differentiator in D2C retail

Data fabrics offer organisations, both within and outside the retail sector, centralised access and a single, unified view of data across their entire enterprise. This can be taken one step further with the use of ‘smart’ data fabrics, which embed a wide range of analytics capabilities, making it faster and easier for brands and retailers to gain new insights and power intelligent predictive and prescriptive services and applications. For retail organisations reluctant to replace siloed systems due to the expectation that the cost would be prohibitive, smart data fabrics mark a way for them to continue to leverage their existing investments by allowing existing legacy applications and data to remain in place. This means enterprises can bridge legacy and modern infrastructure without having to “rip-and-replace” any of their existing technology. When it comes to adopting a D2C model, this approach will allow brands and retailers to harness data from across their different channels to better understand their customers. This will empower them to provide the right types of experiences and interactions and to gain a more informed understanding of the types of products their customers desire, for example.


How Outsourcing Practices Are Changing in 2021: an Industry Insight

The tech ecosystem had already embraced the Fourth Industrial Revolution in terms of advancing technologies, but the outsourcing community was still a step behind: it still relied on humans for the majority of the work. As the pandemic ushered in the future of work, outsourcing changed. A new digital outsourcing model emerged to bring outsourcing approaches on par with the Fourth Industrial Revolution. As the majority of businesses have embraced the technology revolution, outsourcers are gearing up for the same. These technologies in outsourcing will enable both parties to become more flexible, resilient, efficient, and productive while driving stable revenue, and more organizations will strategically incorporate these evolving technologies into their policies in the years ahead. ... Businesses are now looking toward more sustainable outsourcing practices that support long-term relationships. The pandemic forced businesses to revoke their outsourcing contracts mostly because they couldn't entrust their projects to outside companies during uncertain times.


How AI is helping enterprises turn the tables on malicious attacks

The major benefit of AI security tools is how they address the needle-in-a-haystack problem, Kler says. Humans cannot handle the proliferation of data points and the massive amounts of data pouring into the system, but AI is very good at identifying, filtering, and prioritizing threat warnings. “It replaces the two overwhelmed SIEM guys trying to filter the millions of alerts in your SOC center,” Kler says. “AI can prioritize and correlate alerts, then direct your attention to the next urgent task.” In the future, AI will also help with threat hunting in the network, uncovering subtle correlations and statistical anomalies and highlighting them for security teams. AI can also be used for overall threat intelligence, predicting when, where, and what kind of attacks your organization might face next — predictive maintenance, in other words, to determine what's going to go wrong next. For instance, if attacks on medical facilities ramp up, it can warn you that your own medical facility is now at increased risk. But remember that AI is not a silver bullet that will solve every security issue, Kler says.
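
As a concrete illustration of that prioritization idea, the sketch below scores a batch of alerts by statistical unusualness with scikit-learn's IsolationForest and surfaces the most anomalous first. The features and figures are invented for illustration, not drawn from the article:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one alert: [events per minute, distinct source IPs, bytes out]
alerts = np.array([
    [5,   2,   1_000],
    [7,   3,   1_200],
    [6,   2,     900],
    [400, 90, 800_000],   # the needle in the haystack
])

# Lower score_samples values mean "more anomalous", so sorting ascending
# puts the alert most worth a human's attention at the front of the queue.
scores = IsolationForest(random_state=0).fit(alerts).score_samples(alerts)
priority_order = np.argsort(scores)
print(priority_order)  # the outlier alert comes first
```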


McKinsey: These are the skills you will need for the future of work

Our research suggests governments could consider reviewing and updating curricula to focus more strongly on the DELTAs. Given the weak correlation between proficiency in self-leadership and interpersonal DELTAs and higher levels of education, a strong curricular focus on these soft skills may be appropriate. Governments could also consider leading further research. Many governments and academics have started to define taxonomies of the skills citizens will require, but few have done so at the level described here. Moreover, few, if any, have undertaken the considerable amount of research required to identify how best to develop and assess such skills. For instance, for each DELTA within the curriculum, research would be required to define the progression and proficiency levels achievable at different ages, and to design and test developmental strategies and assessment models. The solutions for different DELTAs are likely to differ widely. For example, the solutions to develop and assess “self-awareness and self-management” would differ from those required for “work-plan development” or “data analysis.”


Beginner’s Guide To Lucid: A Network For Visualizing Neural Networks

Lucid is a library that provides a collection of infrastructure and tools for researching neural networks and understanding how they make interpretations and decisions based on their input. It is a step up from DeepDream and provides flexible abstractions so that it can be used for a wide range of interpretability research. Lucid helps us understand the how and why of a given prediction, which in turn helps end users understand why the model produced the result it did. There is growing interest in making neural networks interpretable to humans, for research purposes and for better understanding, and the field of neural network interpretability has formed to address these concerns. Lucid works with convolutional neural networks, which have many convolutional layers. The early layers look for basic lines and simple shapes and patterns in the input image. Their results propagate forward through deeper layers that respond to increasingly complex patterns, and this information finally generates the output in the last layers.
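
A minimal usage sketch, adapted from Lucid's published quickstart (it targets TensorFlow 1.x; the layer and channel name are just an example), renders an image showing what one channel of InceptionV1 responds to:

```python
import lucid.modelzoo.vision_models as models
from lucid.optvis import render

model = models.InceptionV1()
model.load_graphdef()

# Optimize an input image to maximally excite one channel of a mid-level
# layer, visualizing the pattern that channel has learned to detect.
_ = render.render_vis(model, "mixed4a_pre_relu:476")
```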


4 ways the coder community can help fix its diversity problem

Open source, by design, welcomes diversity because anyone can contribute to software code from anywhere in the world. Teams are often geographically distributed, which leads to more diversity, and research shows that this correlates with better team output. We witnessed open source's diversity-powered resilience in action last year. As the pandemic bore down, GitHub, the largest open source developer platform with more than 50 million developers, found that developer activity remained consistent or even increased. If the pandemic reduced developer activity in one region more than another, at one time or another, the geographic diversity of the community may have mitigated the impact. To some extent, that happens every year as different regions go quieter than others for holidays, such as Christmas in the Western world and Lunar New Year in China. In the past three decades, open source has moved from the fringe of software development to its core, and it has transformed how software is built and made.


What really is consumable analytics?

Put simply, consumable analytics visualises data. It brings together vast amounts of information and presents it in a straightforward, easy-to-understand format, so that as users navigate the business system they are exposed to the patterns and trends they need without having to search for that data manually. Every record becomes a dashboard that can be easily interpreted by the user, alerting teams to key data insights in real time and allowing them to take appropriate action quickly. Take a change in total monthly revenue. This could indicate a variety of issues, such as inaccurate forecasting or a poor sales period, much in the same way that a sharp increase in customer help desk requests could indicate a faulty product line or technical problems online. Gathering this kind of information manually would take considerable time and manpower, and problems can easily be caught too late if a specialist team is not consistently monitoring these reports. Consumable analytics flags these changes as they happen, saving the time and resources needed to identify the problem and focus on a solution.
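
The underlying mechanics can be as simple as comparing the latest figure against a trailing baseline and raising a flag when the change crosses a threshold. A minimal Python sketch with invented numbers:

```python
import pandas as pd

# Five months of total revenue; the last month drops sharply.
revenue = pd.Series(
    [120_000, 118_000, 123_000, 121_000, 97_000],
    index=pd.period_range("2021-02", periods=5, freq="M"),
)

baseline = revenue.iloc[:-1].mean()                 # trailing average
change = (revenue.iloc[-1] - baseline) / baseline   # relative move

if abs(change) > 0.10:  # 10% threshold, tunable per metric
    print(f"ALERT: monthly revenue moved {change:+.1%} vs. trailing average")
```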


CISA Emphasizes Urgency of Avoiding 'Bad' Security Practices

The continued use of outdated and unsupported hardware is a long-standing cybersecurity problem, says Erich Kron, a former security manager for the U.S. Army’s 2nd Regional Cyber Center. "End-of-life and old software often lacks the ability to be patched, leaving known vulnerabilities for attackers to exploit," he says. "Hard-coded passwords, or the inability to handle complex or secure passwords, is a significant risk in both the private and public sectors." Kron, a security awareness advocate for the security firm KnowBe4, adds that the bad practices catalog from CISA "makes for good overall guidance for improvements in cyber hygiene. There is power in the government setting the example for the private sector by bringing light to these bad practices." Frank Downs, a former U.S. National Security Agency offensive analyst, offers a similar perspective. "This collection of practices can act as a single point of truth for the field … a universal touchstone that can provide a baseline for all organizations. 



Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad

Daily Tech Digest - July 04, 2021

The importance of Robotic Process Automation

Michael believes that RPA will grow in importance in the future for a number of reasons. Firstly, understanding: it's no longer an unknown technology. So many large organizations have Digital Workforces that the worry and uncertainty around them have gone. Secondly, there is a real drive to add 'Intelligent' ahead of 'Automation'. While we aren't quite at widespread adoption of 'Intelligent Automation' just yet, these cognitive elements are getting better and more available each week. Once we have more use cases, we will see the early adopters of RPA take the next step and begin to 'add the human back into the robot'. Thirdly, the net cost of RPA is decreasing. There are now community versions available free of charge, additional software bundled with the platforms, and training available for free; the barriers to entry are disappearing. Furthermore, Mahesh highlights that the global pandemic and the economic crisis have put a lot of organizations in a state of flux, made them change business processes, and highlighted the need for more automation through RPA.


How AI Is Changing The Real Estate Landscape

AI has applications in estimating the market value of properties and predicting their future price trajectory. For example, ML algorithms combine current market data with public information such as mobility metrics, crime rates, schools, and buying trends to arrive at the best pricing strategy. The AI uses a regression algorithm, accounting for property features such as size, number of rooms, property age, home quality characteristics, and macroeconomic demographics, to calculate the best price range. The algorithms can also predict prices based on geographic location or future development. Online real estate marketplace Zillow puts out home valuations for 104 million homes across the US. The company, founded by former Microsoft executives, uses cutting-edge statistical and machine learning models to vet hundreds of data points for individual homes. Zillow employs a neural network-based model to extract insights from huge swathes of data, tax assessor records, and direct feeds from hundreds of multiple listing services and brokerages.
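
A toy version of such a pricing regression, sketched with scikit-learn; the library choice, features, and figures are illustrative, and real valuation models like Zillow's are far larger and neural network-based:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# A tiny, made-up dataset of property features and sale prices.
homes = pd.DataFrame({
    "sqft":      [1400, 2100, 850, 3000, 1750],
    "bedrooms":  [3, 4, 2, 5, 3],
    "age_years": [20, 5, 45, 1, 12],
    "price":     [310_000, 520_000, 190_000, 760_000, 405_000],
})

X, y = homes.drop(columns="price"), homes["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a regression on the known sales, then estimate prices for unseen homes.
model = GradientBoostingRegressor().fit(X_train, y_train)
print(model.predict(X_test))
```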


Quantum Computing just got desktop sized

Quantum computing is coming on leaps and bounds. Now there's an operating system available on a chip, thanks to a Cambridge University-led consortium whose vision is to make quantum computers as accessible and well known as the Raspberry Pi. This “sensational breakthrough” is likened by the Cambridge Independent Press to the moment during the 1960s when computers shrank from being room-sized to sitting on top of a desk. Around 50 quantum computers have been built to date, and they all use different software – there is no quantum equivalent of Windows, iOS or Linux. The new project will deliver an OS that allows the same quantum software to run on different types of quantum computing hardware. The system, Deltaflow.OS (full name Deltaflow-on-ARTIQ), has been designed by Cambridge University startup Riverlane. It runs on a chip developed by consortium member SEEQC using a fraction of the space required by previous hardware. SEEQC is headquartered in the US with a major R&D site in the UK. “In its most simple terms, we have put something that once filled a room onto a chip the size of a coin, and it works,” said Dr. Matthew Hutchings.


This Week in Programming: GitHub Copilot, Copyright Infringement and Open Source Licensing

On the question of copyright infringement, Guadamuz first points to a research paper by Albert Ziegler published by GitHub, which looks at situations where Copilot reproduces exact text and finds those instances to be exceedingly rare. In the paper, Ziegler notes that “when a suggestion contains snippets copied from the training set, the UI should simply tell you where it’s quoted from,” as a safeguard against infringement claims. On the question of the GPL license and “derivative” works, Guadamuz again disagrees, arguing that the issue at hand comes down to how the GPL defines modified works, and that “derivation, modification, or adaptation (depending on your jurisdiction) has a specific meaning within the law and the license.” “You only need to comply with the license if you modify the work, and this is done only if your code is based on the original to the extent that it would require a copyright permission, otherwise it would not require a license,” writes Guadamuz. “As I have explained, I find it extremely unlikely that similar code copied in this manner would meet the threshold of copyright infringement, there is not enough code copied...”


Django Vs Express: The Key Differences To Observe in 2021

Django is a Python framework that enables rapid development. It has a pragmatic and clean design and is recognized for its 'batteries included' philosophy: it comes ready to use. Among its vital features: Django handles content management, user authentication, site maps, and RSS feeds out of the box, and it is extremely fast, designed to help programmers take web applications from initial conception to completion as rapidly as possible. ... Express.js is a flexible, minimal Node.js web app framework that supplies a robust set of features for mobile and web-based apps. With numerous HTTP utility methods and middleware at its disposal, creating a dynamic API is quick and easy, and many popular web frameworks are built on top of it. Among its noteworthy features: middleware components have access to the client request, the database, and other middleware, and are primarily responsible for organizing the framework's different functions; Express.js also exposes several commonly used Node.js capabilities as functions that can be employed anywhere in the application.
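
To make the 'batteries included' point concrete, here is a minimal sketch of a Django view plus URL route that returns JSON without any third-party packages. It assumes it lives inside an existing Django project, and the names are illustrative:

```python
from django.http import JsonResponse
from django.urls import path

def health(request):
    # Django handles request parsing, routing, and response serialization;
    # no extra libraries are needed for any of it.
    return JsonResponse({"status": "ok"})

urlpatterns = [
    path("health/", health),
]
```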


Unleashing the Power of MLOps and DataOps in Data Science

Data is overwhelming, and so is the science of mining, analyzing, and delivering it for real-time consumption. However valuable data is for business, it can still put the privacy of millions of users at unimaginable risk. That is exactly why there is a sudden inclination towards more automated processes. In the past year, enterprises sticking to conventional analytics have realized that they will not survive much longer without a makeover. For example, enterprises are experimenting with micro-databases, each storing master data for a single business entity only. There is also an increase in the adoption of self-service practices to discover, cleanse, and prepare data. Enterprises have understood the importance of embracing the 'XOps' mindset and delegating more important roles to MLOps and DataOps practices. MLOps matters because bringing ML models to production is more difficult than training them or deploying them as APIs, and the complication worsens further in the absence of governance tools.
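
As a small illustration of the gap between training and execution, the sketch below serves a previously trained model as an HTTP API. FastAPI, joblib, and the model file name are assumptions for illustration, and real MLOps adds versioning, monitoring, and governance around a step like this:

```python
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # a pre-trained, serialized model artifact

class Features(BaseModel):
    values: List[float]

@app.post("/predict")
def predict(features: Features) -> dict:
    # One HTTP call replaces a manual, notebook-bound scoring step.
    return {"prediction": model.predict([features.values]).tolist()}
```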


TrickBot Spruces Up Its Banking Trojan Module

TrickBot is a sophisticated (and common) modular threat known for stealing credentials and delivering a range of follow-on ransomware and other malware. But it started out as a pure-play banking trojan, harvesting online banking credentials by redirecting unsuspecting users to malicious copycat websites. According to researchers at Kryptos Logic Threat Intelligence, this functionality is carried out by TrickBot's webinject module. When a victim attempts to visit a target URL (such as a banking site), the TrickBot webinject package performs either a static or a dynamic web injection to achieve its goal, as the researchers explained: “The static inject type causes the victim to be redirected to an attacker-controlled replica of the intended destination site, where credentials can then be harvested,” they said, in a Thursday posting. “The dynamic inject type transparently forwards the server response to the TrickBot command-and-control server (C2), where the source is then modified to contain malicious components before being returned to the victim as though it came from the legitimate site.”


How a college student founded a free and open source operating system

FreeDOS was a very popular project throughout the 1990s and into the early 2000s, but the community isn’t as big these days. But it’s great that we are still an engaged and active group. If you look at the news items on our website, you’ll see we post updates on a fairly regular basis. It’s hard to estimate the size of the community. I’d say we have a few dozen members who are very active. And we have a few dozen others who reappear occasionally to post new versions of their programs. I think to maintain an active community that’s still working on an open source DOS from 1994 is a great sign. Some members have been with us from the very beginning, and I’m really thankful to count them as friends. We do video hangouts on a semi-regular basis. It’s great to finally “meet” the folks I’ve only exchanged emails with over the years. It's meetings like this when I remember open source is more than just writing code; it's about a community. And while I've always done well with our virtual community that communicates via email, I really appreciated getting to talk to people without the asynchronous delay or artificial filter of email—making that real-time connection means a lot to me.


Let Google Cloud’s predictive services autoscale your infrastructure

Predictive autoscaling uses your instance group's CPU history to forecast future load and calculate how many VMs are needed to meet your target CPU utilization. Our machine learning adjusts the forecast based on recurring load patterns for each MIG. You can specify how far in advance you want the autoscaler to create new VMs by configuring the application initialization period. For example, if your app takes 5 minutes to initialize, the autoscaler will create new instances 5 minutes ahead of the anticipated load increase. This allows you to keep your CPU utilization within the target and keep your application responsive even when demand grows quickly. Many of our customers have different capacity needs during different times of the day or different days of the week. Our forecasting model understands weekly and daily patterns to cover these differences. For example, if your app usually needs less capacity on the weekend, our forecast will capture that. Or, if you have higher capacity needs during working hours, we also have you covered.
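
Once a forecast exists, the sizing logic reduces to simple arithmetic: divide predicted demand by the usable capacity per VM at the target utilization. A back-of-the-envelope sketch with invented numbers:

```python
import math

def vms_needed(predicted_total_cpu_cores: float,
               cores_per_vm: int = 4,
               target_utilization: float = 0.6) -> int:
    # At a 60% target, each 4-core VM should carry only 2.4 cores of load.
    usable_cores_per_vm = cores_per_vm * target_utilization
    return math.ceil(predicted_total_cpu_cores / usable_cores_per_vm)

# Forecasting 30 cores of demand with 4-core VMs at a 60% target:
print(vms_needed(30.0))  # -> 13
```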


The IoT Cloud Market

Cloud computing and the Internet of Things (IoT) have become inseparable when one or the other is discussed, and with good reason: you really can't have IoT without the cloud. The cloud, a grander idea that stands on its own, is nonetheless integral to the IoT platform's success. The Internet of Things is a system of interrelated computing devices, mechanical and digital machines, objects, and other devices provided with unique identifiers (such as an IP address) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction. Whereas the traditional internet consists of clients – PCs, tablets, and smartphones, primarily – the Internet of Things can include cars, street signs, refrigerators, or watches. Whereas traditional internet input and interaction rely on human input, IoT is almost totally automated. Because the bulk of IoT devices are not in traditional data centers and almost all are connected wirelessly, they rely on the cloud for connectivity. For example, connected cars that send up terabytes of telemetry aren't always going to be near a data center to transmit their data, so they need cloud connectivity.



Quote for the day:

"Strong convictions precede great actions." -- James Freeman Clarke

Daily Tech Digest - July 03, 2021

DeepMind AGI paper adds urgency to ethical AI

Despite assurances from stalwarts that AGI will benefit all of humanity, there are already real problems with today's single-purpose narrow AI algorithms that call this assumption into question. According to a Harvard Business Review story, when AI examples from predictive policing to automated credit scoring algorithms go unchecked, they represent a serious threat to our society. A recently published Pew Research survey of technology innovators, developers, business and policy leaders, researchers, and activists reveals skepticism that ethical AI principles will be widely implemented by 2030, due to a widespread belief that businesses will prioritize profits and governments will continue to surveil and control their populations. If it is so difficult to enable transparency, eliminate bias, and ensure the ethical use of today's narrow AI, then the potential for unintended consequences from AGI appears astronomical. And that concern covers only the actual functioning of the AI. The political and economic impacts of AI could produce a range of possible outcomes, from a post-scarcity utopia to a feudal dystopia. It is possible, too, that both extremes could co-exist.


Distributed DevOps Teams: Supporting Digitally Connected Teams

The teams using the visualization board were in different countries, so they needed to address digital connection across time zones. This meant a more robust process for things like retrospectives, more robust breakdown of stories into tasks, more "scheduled" time for showcase and issue resolution, etc. The team found that, while they worried a more defined process would stymie their agility, it worked well in focusing their activities productively in line with the broader objectives, without the necessity of being in constant communication. They found they needed more overlapping work time, particularly when they were in release planning and deployment. And they had to think about and plan task/work turnover to the other team at the end of each day – something they never had to do when in physical proximity. They’ve seen some team members fall back into role-based activities more often. There simply isn’t the natural communication and subsequent spark of curiosity that is truly the hallmark of team collaboration.


The Cost of Managed Kubernetes - A Comparison

Running a Kubernetes cluster in EKS, you have the option of using either a standard Ubuntu image as the OS for your nodes or Amazon's optimized EKS AMIs, which can deliver better speed and performance than a generic OS. Once the cluster is running, there's no way to enable automatic upgrades of the Kubernetes version. While EKS has excellent documentation on how to upgrade your cluster, it is a manual process. If your nodes start reporting failures, EKS doesn't have a way of enabling auto-repair as in GKE. This means you'll have to either monitor that yourself and manually fix nodes or set up your own system to repair broken nodes. As with GKE, you pay an administration fee of $0.10 per hour per cluster when running EKS, after which you only pay for the underlying resources. If you want to run your cluster on-prem, you can do so using either AWS Outposts or EKS Anywhere, which launches sometime in 2021.
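
For scale, that control-plane fee works out as follows (a rough calculation; in practice node costs dominate the bill):

```python
# $0.10 per cluster-hour for the managed control plane, GKE and EKS alike.
HOURS_PER_MONTH = 730  # average month (8,760 hours / 12)
monthly_fee = 0.10 * HOURS_PER_MONTH
print(f"${monthly_fee:.2f}/month per cluster")  # -> $73.00/month per cluster
```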


Resetting Your IoT Device Before Reselling It Isn't Enough, Researchers Find

Those that had reset their devices, however, hadn’t quite wiped the slate clean in the way they thought they had. Researchers found that, contrary to what Amazon says, you can actually recover a lot of sensitive personal data stored on factory reset devices. The reason for this is related to how these devices store your information using NAND flash memory—a storage medium that, due to certain processes, doesn’t actually delete the data when the device is reset. “We show that private information, including all previous passwords and tokens, remains on the flash memory, even after a factory reset. This is due to wear-leveling algorithms of the flash memory and lack of encryption,” researchers write. “An adversary with physical access to such devices (e.g., purchasing a used one) can retrieve sensitive information such as Wi-Fi credentials, the physical location of (previous) owners, and cyber-physical devices (e.g., cameras, door locks).” Granted, said hypothetical snoopers would really have to know what they were doing—and their data thieving would entail a certain amount of expertise.


Defeating Ransomware-as-a-Service? Think Intel-Sharing

In addition to technological solutions, a necessary element in building a strong cybersecurity foundation is working with all internal and external stakeholders, including law enforcement. More data helps enable more effective responses. Because of this, cybersecurity professionals must openly partner with global or regional law enforcement, like US-CERT. Sharing intelligence with law enforcement and other global security organizations is the only way to effectively take down cybercrime groups. Defeating a single ransomware incident at one organization does not reduce the overall impact within an industry or peer group. It’s a common practice for attackers to target multiple verticals, systems, companies, networks and software. To make it more difficult and resource-intensive for cybercriminals to attack, public and private entities must collaborate by sharing threat information and attack data. Private-public partnerships also help victims recover their encrypted data, ultimately reducing the risks and costs associated with the attack. Visibility increases as public and private entities band together.


Maintaining a Security Mindset for the Cloud Is Crucial

A lot of organizations are moving from traditional on-premises application deployments into one or multiple clouds. Those transitions carry architectural baggage: how to design networking and security for this new cloud era, where applications are distributed across multi-cloud, software-as-a-service, and even edge computing environments. And so security is becoming paramount to the success of that motion. We also know that security attacks are becoming increasingly sophisticated, and that's especially true when applications move to the cloud. Cloud infrastructure does not always offer the same level of capabilities and features that enterprises have been used to in their on-premises environments. So this security-oriented mindset is extremely important for building networks that now span not only the on-premises environment but also cloud environments.


DevOps Automation: How Is Automation Applied In DevOps Practice

We can see automation being carried out at every phase of development, starting from triggering the build, carrying out unit testing, packaging, deploying to the specified environments, carrying out build verification tests, smoke tests, and acceptance tests, and finally deploying to the production environment. And automating test cases means not just unit tests but installation tests, integration tests, user experience tests, UI tests, and so on. DevOps requires the operations team, in addition to development activities, to automate all their activities, such as provisioning servers, configuring servers, configuring networks and firewalls, and monitoring the application in the production system. Hence, to answer what to automate: build triggers, compilation and builds, deployment or installation, infrastructure setup as coded scripts, environment configuration as coded scripts, and, needless to say, testing, post-deployment performance monitoring in production, log monitoring, alert monitoring, and pushing notifications to and receiving alerts from live systems in case of errors and warnings.


Kubernetes-Run Analytics at the Edge: Postgres, Kafka, Debezium

Implementing databases and data analytics within cloud native applications involves several steps and tools, from data ingestion and preliminary storage to data preparation and storage for analytics and analysis. An open, adaptable architecture will help you execute this process more effectively. This architecture requires several key technologies. Container and Kubernetes platforms provide a consistent foundation for deploying databases, data analytics tools, and cloud native applications across infrastructure, as well as self-service capabilities for developers and integrated compute acceleration. PostgreSQL, Apache Kafka and Debezium can be deployed using Kubernetes Operators to provide a cloud native data analytics solution that can be used for a variety of use cases across hybrid cloud environments — including the datacenter, public cloud infrastructure, and the edge — for all stages of cloud native application development and deployment.
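
Downstream of such a pipeline, consuming Debezium's change events from Kafka can look like the Python sketch below. The kafka-python package, topic name, and broker address are assumptions for illustration; Debezium names topics <server>.<schema>.<table>:

```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "dbserver1.inventory.customers",          # one topic per captured table
    bootstrap_servers="kafka:9092",
    value_deserializer=lambda raw: json.loads(raw) if raw else None,
)

for message in consumer:
    event = message.value
    if event is None:
        continue  # tombstone record for a deleted key
    payload = event.get("payload", {})
    # "op" is c(reate)/u(pdate)/d(elete); "after" holds the new row state.
    print(payload.get("op"), payload.get("after"))
```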


DevOps Testing Tutorial: How DevOps Will Impact QA Testing?

Although there are subtle differences between Agile and DevOps testing, those working with Agile will find DevOps a little more familiar to work with (and eventually adopt). While Agile principles are applied successfully in the development and QA iterations, it is a different story altogether (and often a bone of contention) on the operations side. DevOps proposes to rectify this gap. Now, instead of stopping at Continuous Integration, DevOps involves “Continuous Development”, where code that is written and committed to version control is built, deployed, tested, and installed on the production environment, ready to be consumed by the end user. This process helps everyone in the chain, since environments and processes are standardized and every action in the chain is automated. It also gives all stakeholders the freedom to concentrate their efforts on designing and coding a high-quality deliverable rather than worrying about the various build, operations, and QA processes. It brings the time-to-live down drastically, to about 3-4 hours from the time code is written and committed to deployment in production for end-user consumption.



Where Can An Agile Transformation Lead Your Company?

The rituals of Agile development are largely procedural and tactical. In contrast, organizational agile transformation is driven by and reinforces cultural norms that make staying agile possible. A development lead can compel team members to participate in the process of daily scrums and weekly sprints. Agile development doesn’t address the task of building genuine collaboration or a culture of accountability. In contrast, an agile transformation requires cultural support to move the organization into a state of resonant agility. The state, in turn, reinforces and strengthens norms of collaboration and accountability that an agile culture encourages. An agile culture takes a broader view, beyond providing a prescriptive process for building something specific. It pulls together stakeholders from multiple functional areas to tackle an issue through organic, collaborative analysis. ... Next-generation technologies are purpose-built, not broad platforms that force conformity instead of innovation. There’s no one platform or suite of tools for an agile organization. Teams work with an organic tech stack that gives them the flexibility to use the best tool for the job, and everyone’s job is different.



Quote for the day:

"Effective team leaders adjust their style to provide what the group can't provide for itself." -- Kenneth Blanchard

Daily Tech Digest - July 02, 2021

Addressing the cybersecurity skills gap through neurodiversity

It’s time to challenge the assumption that qualified talent equals neurotypicality. There are many steps companies can take to ensure inclusivity and promote belonging in the workplace. Let’s start at the very beginning and focus on job postings. Job postings should be black and white in terms of the information they ask for and the job requirements, and they should be more inclusive and less restrictive in what is required. Include a contact email address where an applicant can ask for accommodations, and be prepared to provide those accommodations, even if it means taking a less traditional approach. Traditional interviews can be a challenge for neurodivergent individuals, and this is often the first hurdle to employment. To ease some candidates’ nerves, for example, you could provide a list of the questions that will be asked as a guideline. More importantly, don’t judge someone based on their lack of eye contact. To promote an inclusive culture of neurodiversity and belonging, the workplace should be more supportive of different needs.


Cost of Delay: Learn Why Your Organisation Is Losing Millions

Backlogs in business can cause a drop in revenue. This is why some experts say that if you want to make a profit or save money, you have to prioritize your backlog in terms of money. Bear in mind that each product or project has different features or benefits. Consumers often think all of these features are important, but in reality each feature takes a different amount of time to create and implement, and each has a different level of worth to the business. Prioritizing one means limiting or delaying another, and every day that a feature is not in production is another day that the company is not profiting from it. By using Cost of Delay, the company can determine which feature will cost them the most if its delivery is delayed, as in the worked example below. It also sets a clear guideline on which projects matter most for the company and other stakeholders, without the friction of other decision-making obstacles, which brings us to the next point below. ... The MoSCoW method often puts everything in the “Must-Have” bucket. Imagine your company has limited resources and manpower: the quality of work and output will surely suffer.
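
One common way to apply this is CD3 (Cost of Delay Divided by Duration): schedule first the feature that loses the most value per week of effort. A worked sketch with invented figures:

```python
features = [
    # (name, cost of delay in $ per week, duration in weeks)
    ("Checkout redesign", 10_000, 4),
    ("Invoice export",     4_000, 1),
    ("Loyalty points",     9_000, 2),
]

# Higher CD3 means more value lost per week of delay relative to effort,
# so the highest CD3 goes first even if its raw cost of delay is smaller.
for name, cod, weeks in sorted(features, key=lambda f: f[1] / f[2],
                               reverse=True):
    print(f"{name}: CD3 = {cod / weeks:,.0f}")
# -> Loyalty points (4,500), Invoice export (4,000), Checkout redesign (2,500)
```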


Google releases new open-source security software program: Scorecards

These Scorecards are based on a set of automated pass/fail checks that provide a quick review of many open-source software projects. The Scorecards project is an automated security tool that produces a "risk score" for open-source programs. That's important because only some organizations have systems and processes in place to check new open-source dependencies for security problems. Even at Google, with all its resources, this process is often tedious, manual, and error-prone. Worse still, many open-source projects and developers are resource-constrained, so security often ends up low on the task list. This leads to critical projects not following good security best practices and becoming vulnerable to exploits. The Scorecards project hopes to make these checks easier with the release of Scorecards v2, which includes new security checks, scales up the number of projects being scored, and makes the data easily accessible for analysis. For developers, Scorecards helps reduce the toil and manual effort required to continually evaluate changing packages when maintaining a project's supply chain.


Kubernetes Fundamentals: Facilitating Cloud Deployment and Container Simplicity

Kubernetes has made containers so popular that they are threatening to make VMs (virtual machines) obsolete. A VM is an operating system or software program that imitates the behavior of a physical computer and can run applications and programs as though it were a separate computer. A virtual machine can be unplugged from one computer and plugged into another, bringing its software environment with it. Both containers and VMs can be customized and designed to any specifications desired, and both offer isolated processes, providing an environment for experimentation that will not affect the "real" computer. Typically, containers do not include a guest operating system; they usually contain only the application code and run only the operations needed. This is made possible by using "kernel features" from the physical computer. A kernel is the core program of a computer operating system and has complete control over the entire system. On most computers, it is often the first program (after the bootloader) to be loaded on start-up.


IoT is the Key to Reopening Safe Workplaces

By implementing IoT connected devices for predictive cleaning, building managers can improve the overall efficiency and cleanliness of shared spaces. For example, IoT sensors can notify facility managers when soap dispensers and towels are running low so they can replace them immediately without a manual check. Predictive cleaning can lower infection rates and costs by enabling on-demand and as-needed cleaning to ensure common areas such as restrooms and conference rooms are safe for employees to use. Freespace created a Cleanreader solution that works by using sensors to collect occupancy data. It provides facility managers and cleaning staff with the data they need to ensure that desks, meeting rooms and communal areas are cleaned and disinfected between users. Our expectation as workers and consumers has reached a new baseline. We want to be able to see what businesses are doing to be safe and to know they are addressing how to avoid future impacts of this pandemic or any future major health crisis. Clearly, workers are concerned about the safety of their work environments. OSHA data shows more than 60,000 COVID-19-related complaints have been filed to the agency’s state and federal offices, as of March 28, 2021.


The Most Prolific Ransomware Families: A Defenders Guide

DomainTools researchers feel it is important to remind readers that all of these groups make alliances, share tools, and sell access to one another. Nothing in this space is static, and even though a single piece of software may be behind a set of intrusions, there are likely several different operators using that same ransomware who will tweak its operation to their designs. The playbook of the affiliate programs that many of these ransomware authors run is to design a piece of ransomware and then sell it off for a percentage of the ransom gained. Think of it as a cybercrime multi-level marketing scheme. Often there is a builder tool that allows the affiliate to customize the ransomware for a specific target, which at the same time tweaks the software slightly so it can evade standard, static detection mechanisms. This article's intent is not to dive deep into tracking individual affiliates or into each stage of a piece of packed malware (looking at you, CobaltStrike), but to cover just the top level of software used and its relations. Lastly, we must mention that initial access for the ransomware is often provided by a backdoor or botnet operator, frequently called an initial access broker.


The next frontier of digital transformation: Are you onboard?

All these transformations are going to bring a lot of confidential data online, some of it in the public domain. This data will need sufficient protection from being hacked and misused, so the next big digital transformation will be in the field of cybersecurity. Mathias cautions about the safety of customer data when adopting digital as a means of business. “Brands have to be very sensitive to the data privacy concerns of consumers even as they need to provide a real-time, intuitive experience. This is a fine balance that many brands struggle with, as in the digital world users expect similar levels of customer experience from a local online retailer as they would from global giants like Amazon,” he adds. Tibrewala also noted that customer data is becoming more important than ever before. “Brands will need to invest in technologies like customer data platforms and marketing automation to assimilate customer data; generate a single view of the customer across online and offline channels, and then use machine intelligence to provide the customer with the best possible solution for their requirement.”


Using collections to make your SQL access easier and more efficient

Collections are essentially indexed groups of data elements that all have the same type, such as arrays or lists (an array, for example, is a collection of index-based elements). Most programming languages provide support for collections. Collections reduce the number of database calls because regularly accessed static data is cached by the collections themselves, and fewer calls mean higher speed and efficiency. Collections can also reduce the total code needed for an application, further increasing efficiency. Each element in a collection has a unique identifier called a subscript. Collections come with their own set of methods for operating on individual elements, and PL/SQL includes methods for manipulating individual elements or the collection in bulk. ... Earlier versions of PL/SQL used what were known first as PL/SQL tables and later as index-by tables. In a PL/SQL table, collections were indexed using an integer, and individual collection elements could then be referenced using the index value. Because it was not always easy to identify an element by its subscript, PL/SQL tables evolved to include indexing by alphanumeric strings.
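
The caching benefit translates across languages. Here is the same idea sketched in Python with sqlite3 rather than PL/SQL (table and column names are invented): one query populates an in-memory collection, and every later lookup is a dictionary access by subscript instead of another database call:

```python
import sqlite3

conn = sqlite3.connect("app.db")

# One database call populates the collection...
status_names = {
    row[0]: row[1]
    for row in conn.execute("SELECT status_id, status_name FROM statuses")
}

# ...and every subsequent lookup reads from memory, analogous to reading
# a PL/SQL index-by table by its subscript.
def describe(status_id: int) -> str:
    return status_names.get(status_id, "unknown")
```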


Single page web applications and how to keep them secure

The architecture of SPAs presents new vulnerabilities for attackers to exploit because the attack surface shifts away from the client layers of the app to the APIs that serve as the data transport layer refreshing the SPA. With multi-page web apps, security teams need to secure only the individual pages of the app to protect sensitive customer data. Traditional web security tools such as web application firewalls (WAFs) cannot protect SPAs because they do not address the underlying vulnerabilities in the embedded APIs and back-end microservices. For example, in the 2019 Capital One data breach, the attacker reached beyond the client layer by exploiting a misconfiguration in Capital One’s WAF and extracted data from the underlying API-driven cloud services hosted on AWS. SPAs require a proper inventory of all their APIs, much as multi-page web apps require an inventory of their individual pages. For SPAs, vulnerabilities begin with the APIs: sophisticated attackers will often mount multi-level attacks that reach through the client-facing app, looking for unauthenticated, unauthorized, or unencrypted APIs exposed to the internet through which they can extract customer data.
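Since the weaknesses described here start with unauthenticated APIs, a minimal sketch may help. The following Python/Flask example rejects any API call that lacks a valid bearer token; the static token set is purely illustrative and stands in for a real identity provider, and the endpoint name is invented.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Illustrative only: a static token standing in for a real identity provider.
VALID_TOKENS = {"example-token"}

@app.before_request
def require_bearer_token():
    # Every API behind the SPA must authenticate; unauthenticated endpoints
    # exposed to the internet are exactly what attackers probe for first.
    auth = request.headers.get("Authorization", "")
    token = auth[len("Bearer "):] if auth.startswith("Bearer ") else ""
    if token not in VALID_TOKENS:
        abort(401)

@app.route("/api/accounts")
def accounts():
    return jsonify([])  # placeholder payload

if __name__ == "__main__":
    app.run()
```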


Could cryptocurrency be as big as the Internet?

As with every nascent technology, we are not yet seeing all of cryptocurrency’s potential. Yet the winds of change are blowing. Payments are just one small aspect of what Bitcoin and cryptocurrencies enable. With the unique ability to create programmable financial instruments, the ecosystem of technology being built on that foundation is enabling diverse new use cases. Solutions like the Lightning Network on top of Bitcoin for fast, small payments, or collateral-based loans for fast liquidity, start to create possibilities beyond the foundational aspects of bitcoin and other cryptocurrencies. This could not have come at a better time. Following the pandemic, large retailers are increasingly determined to move to a 100% cashless model. For them, the cost of handling cash across thousands of stores is an expense they want to eliminate, and moving to more digital payment structures, including the adoption of cryptocurrencies, is a path many will start to follow over the next year. There are security benefits to consider as well: the cryptographic certainty of cryptocurrencies adds an extra layer of security for financial institutions by removing the forgery and counterparty risks inherent in other current financial instruments.
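The “cryptographic certainty” claim rests on digital signatures: an instruction signed with one key cannot be altered or forged without detection. A toy illustration using Python’s third-party cryptography package and the secp256k1 curve (the curve Bitcoin uses), not any particular network’s actual implementation:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The holder signs a payment instruction with their private key.
private_key = ec.generate_private_key(ec.SECP256K1())
payment = b"pay 0.01 BTC to merchant X"
signature = private_key.sign(payment, ec.ECDSA(hashes.SHA256()))

# Anyone can verify the signature; an altered or forged instruction fails.
public_key = private_key.public_key()
try:
    public_key.verify(signature, b"pay 100 BTC to attacker Y",
                      ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("tampered instruction rejected")
```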



Quote for the day:

"Leadership has a harder job to do than just choose sides. It must bring sides together." -- Jesse Jackson

Daily Tech Digest - July 01, 2021

How CIO Roles Will Change: The Future of Work

On the IT side, CIOs sent workers home with laptops and video conferencing software last year. But it's time to reexamine whether those simple tools are adequate. Do workers need bigger displays? Do they need more than one monitor? What about webcams and better microphones, particularly if employees are representing the corporate brand in virtual meetings with external partners and customers? Other technologies getting more attention include anything to do with security in this age of distributed work, such as edge security and VPNs. Companies are also reevaluating their unified collaboration and communications technologies as they look to enhance collaboration in a virtual setting. Employees are spending more time in software such as Microsoft Teams, Cisco Webex, and Zoom. How can those tools be improved? "CIOs have moved from infrastructure officers to innovation officers," Banting said. "CIOs are finding out what technology can do for the business, how it meets their needs, and how it makes them more agile by promoting distributed working. Technology can be used as an asset rather than a liability on the books. That's quite a fundamental shift in the IT department and the roles that CIOs play."


Composable commerce: building agility with innovation

Composable commerce is a microservices-based, modularised architecture that gives organisations agility through quick, application programming interface (API) driven integrations, from catalogues and product searches to order submissions, inventory, and recommendations. It provides seamless communication between applications, giving customers new ways to interact and connect with brands on a personal level. Development teams can focus their efforts on speed and innovation, while operations can make time for back-end updates, compliance releases, and testing, all without disrupting front-end or back-end operations. It fosters collaboration between departments, so development, operations, marketing, ecommerce, data, finance, and other areas can align and operate as one agile platform. Everything works together cohesively, and with silos eliminated, products can be brought to market quickly and efficiently without manual intervention.
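As a rough sketch of what API-driven composition looks like in practice, the snippet below assembles a product page from two independent services using Python’s requests library. The catalog and inventory endpoints are hypothetical, but the pattern, composing small services over APIs instead of calling one monolith, is the point.

```python
import requests

# Hypothetical service endpoints; real composable stacks expose similar APIs.
CATALOG_API = "https://catalog.example.com/products"
INVENTORY_API = "https://inventory.example.com/stock"

def product_page(product_id):
    # Each capability lives in its own service; the front end composes them.
    product = requests.get(f"{CATALOG_API}/{product_id}", timeout=5).json()
    stock = requests.get(f"{INVENTORY_API}/{product_id}", timeout=5).json()
    return {**product, "in_stock": stock.get("available", 0) > 0}
```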


New approaches for a new era: the mission-critical tools for post-Covid business success

Taking an agile approach enables workforces – especially project management teams – to adapt quickly and easily, promoting creative, out-of-the-box thinking throughout the business. Businesses that have embraced business agility have found that teams work better together, and their decision-making processes often become much quicker than would have been possible otherwise. To enable adaptability, employers need to find ways to drive employee engagement and efficiency regardless of where people are. ... The uptake of innovative technologies that drive true workplace collaboration spans broader work management platforms offered by a range of global providers, communication apps such as Microsoft Teams and Slack, and toolchains for developing and deploying software such as Azure DevOps. Their use has been made easier because they can often be integrated, allowing teams to use the tools they want for various purposes while still keeping collaborative efforts connected. These types of intuitive solutions enable enterprises to rapidly adjust tactics, resources and personnel to keep operations on course when business conditions shift dramatically – providing organizations with a competitive edge through the current health and economic crisis and in a post-Covid world.


Microsoft and Google prepare to battle again after ending six-year truce

The pact was reportedly forged to avoid legal battles and complaints to regulators. It meant we hadn’t seen Microsoft and Google complaining publicly about each other since the days of Scroogled, a campaign that attacked Google’s privacy policies. Now the gloves appear to be off once again, and we’ve seen some evidence of that recently. Google slammed Microsoft for trying to “break the way the open web works” earlier this year, after Microsoft publicly supported a law in Australia that forced Google to pay news publishers for their content. Microsoft has also criticized Google’s control of the ad market, claiming publishers are forced to use Google’s tools, which feed Google’s revenues. The rivalry between the two had been unusually quiet over the past six years, thanks to the legal truce. Microsoft was notably silent during the US government’s antitrust suit against Google last year, despite running the number two search engine at the time. The Financial Times reports that the agreement between Microsoft and Google was also supposed to improve cooperation between the two firms, and that Microsoft was hoping to find a way to run Android apps on Windows.


Continuous Integration and Deployment for Machine Learning Online Serving and Models

One thing to note is that we have continuous integration (CI) and continuous deployment (CD) for both models and services, as shown above in Figure 1. We arrived at this solution after several iterations to address some of the MLOps challenges that emerged as the number of models trained and deployed grew rapidly. The first challenge was to support a large volume of model deployments on a daily basis while keeping the Real-time Prediction Service highly available; we discuss our solution in the Model Deployment section. The second challenge was that the memory footprint of a Real-time Prediction Service instance grows as newly retrained models get deployed, and a large number of models also increases the time required for model downloading and loading during instance (re)starts. We observed that a large portion of older models received no traffic once newer models were deployed; we discuss our solution in the Model Auto-Retirement section. The third challenge concerns model rollout strategies: machine learning engineers may choose to roll out models through different stages, such as shadow, testing, or experimentation.
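The article leaves implementation details to later sections, but the auto-retirement idea can be sketched simply: record when each model last served traffic and unload any model idle beyond a retention window. A toy Python version, in which the two-week window and the bookkeeping structures are assumptions of mine, not the article’s:

```python
import time

RETENTION_SECONDS = 14 * 24 * 3600  # assumed two-week idle window

last_request_at = {}  # model name -> timestamp of last prediction request

def record_prediction(model_name):
    last_request_at[model_name] = time.time()

def retire_idle_models(loaded_models):
    # Unload models that have served no traffic within the window,
    # bounding the instance's memory footprint and (re)start time.
    now = time.time()
    retired = [name for name in loaded_models
               if now - last_request_at.get(name, 0) > RETENTION_SECONDS]
    for name in retired:
        del loaded_models[name]
    return retired
```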


After EI, DI?

In thinking through what a practical model of digital intelligence might look like, we thought it would be useful to identify three elements that make up best practice for operating in a digital environment. One is the analytical and cognitive component: in essence, how to make sense of the welter of information and data the digital world offers. The second is the need to collaborate with others in new ways and through new mediums. The third is the practical mastery and application we need to demonstrate. This third element is akin to how Robert J. Sternberg, James C. Kaufman and Elena L. Grigorenko describe “practical intelligence”; that is to say, how we manage real-world situations or, in our case, navigate the digital world successfully. This is an ability, we would argue, that entails a different, or at least greatly modified, set of skills from those we use in face-to-face environments. ... We aren’t proposing that digital intelligence be treated as a true intelligence, but rather as a loose framework to help us identify the knowledge, skills, attitudes and behaviors that make up the “digital sensibility” needed to operate and succeed in increasingly digital organizations and marketplaces.


SRE vs DevOps: Comparing Two Distinct Yet Similar Software Practices

CTOs, product managers, software executives, and process specialists are looking for new ways to enhance the trustworthiness of their software systems without compromising speed or quality. SRE and DevOps are two such methodologies that are popular in the world of software development today. What does SRE stand for? SRE stands for Site Reliability Engineering. The two practices share similar principles and goals, which can make them look like competitors: two sides of the same coin, each aiming to close the gap between development and operations teams. Yet each has distinct characteristics that set it apart. Rather than being competing approaches to software operations, SRE and DevOps are more like partners that work together to solve organizational hurdles and deliver software quickly. It is worth understanding what these concepts individually mean, what they have in common, how they differ from each other, and how they fit together like pieces of the same puzzle.


How to support collaboration between security and developers

Like everyone else, security people want to see the company succeed and see cool stuff happen. Developers, too, care about more than just delivering code; they know that if something bad happens, there are significant implications they want to avoid. While open lines of communication and mutual understanding are key, it is equally important that DevSecOps teams have a toolset that is similarly integrated and capable of tracking and addressing the changes happening in your organization. Whether we’re talking about changes in cloud providers, the deployment stack, or something else, there is a clear need for a platform that will work where you are, whether in the cloud or on-premises. ... While tools are an essential element of enabling DevSecOps, other challenges remain to be resolved. These include the “unknown unknowns” that organizations encounter as they speed up their digital transformation. For example, organizations across the board rushed to scale up their cloud environments in response to the pandemic last year; however, in the rush, many did not scale up their security and governance processes at the same time and rate.


5 Mistakes I Wish I Had Avoided in My Data Science Career

Do I want to be a data engineer or a data scientist? Do I want to work with marketing and sales data, or do geospatial analysis? You may have noticed that I have been using the term DS in this article as a general label for a lot of data-related career paths (e.g. data engineer, data scientist, data analyst, etc.); that’s because the lines between these titles are so blurred in the data world these days, especially at smaller companies. I have observed a lot of data scientists who see themselves as ONLY data scientists building models and pay no attention to any business aspects, and data engineers who focus only on data pipelining and don’t want to know anything about the modeling going on in the company. The best data talents are the ones who can wear multiple hats, or are at least able to understand the processes of other data roles. This comes in especially handy if you want to work at an early-stage or growth-stage startup, where functions might not yet be specialized and you are expected to be flexible and cover a variety of data-related responsibilities.


Responsible applications of technology drive real change

Thanks to the Digital Revolution, many things that seemed impossible just a few years ago are now commonplace. No one can deny that our productivity – and, indeed, our enjoyment – has been dramatically improved by technologies ranging from AI to Big Data, 5G and the IoT. While new applications for these technologies are being found seemingly every day, it is increasingly important to ask how we can use technology responsibly, to change and improve people’s lives in critical areas like education, healthcare and the environment. The good news is that work is already underway to apply technology in meaningful ways. Take, for instance, the support being provided for young African women programmers in marginalised communities, who are benefitting from free online training and free access to cloud computing resources. The project aims to create one million female coders by 2030, improving their life outcomes by helping them along a career path in engineering and other practical subjects. The iamtheCODE initiative provides them with tailored courses on a range of technical topics including cloud computing, data analysis, machine learning and security.



Quote for the day:

"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis