Daily Tech Digest - November 10, 2021

All your serverless are belong to us

Though serverless has been enabled by the big cloud providers, serverless functions aren’t simply a big-cloud game. As Vercel CEO (and Next.js founder) Guillermo Rauch details in the Datadog report, “Two years ago, Next.js introduced first-class support for serverless functions, which helps power dynamic server-side rendering (SSR) and API routes. Since then, we’ve seen incredible growth in serverless adoption among Vercel users, with invocations going from 262 million a month to 7.4 billion a month, a 28x increase.” From such examples, and many others (including ever shorter function invocation times, which indicate that enterprises are becoming more proficient with functions), it’s clear that serverless computing has taken off. Vendors will continue to press the “no lock-in” marketing button, but customers don’t seem to care. Rather, they may care about lock-in, but they care much more about accelerating their time to customer value. In enterprise computing, as in life, there are always trade-offs. The cost of a perfectly lock-in-free existence is lowest-common-denominator code that is generic across hardware/cloud platforms.


5 Things To Remember When Upgrading Your Legacy Solution

Contrary to popular belief, a legacy framework is not necessarily old. What makes these frameworks problematic is that they remain in use even though they frequently fail to meet critical demands and support core business operations as they are meant to. So, let’s face it - when there is legacy software you cannot replace, you should at least go for modernization. "Legacy systems are not that safe," says Daniela Sawyer, Founder and Business Development Strategist of FindPeopleFast.net. "This happens because, being the older technology, they are not usually supported by the company or the vendor who created it in the first place. Also, it lacks having regular updates and patches to maintain the pace with the modern world. So the new update should ensure the security aspect precisely." Although it might appear as though you're saving costs when you don't spend money updating your digital product, that choice could cost you much more over the long haul.


The role of visibility and analytics in zero trust architectures

There are three NIST architecture approaches for ZTA that have network visibility implications. The first is enhanced identity governance, which (for example) means using the identity of users to allow access to specific resources only once they are verified. The second is micro-segmentation, e.g., dividing cloud or data center assets or workloads and segmenting that traffic from the rest to contain threats and prevent lateral movement. And the third is using network infrastructure and software-defined perimeters, such as zero trust network access (ZTNA), which for example allows remote workers to connect only to specific resources. NIST also describes monitoring of ZTA deployments, outlining that network performance monitoring will need security capabilities for visibility. This means traffic should be inspected and logged on the network (and analyzed to identify and react to potential attacks), including asset logs, network traffic and resource access actions. Furthermore, NIST expresses concern about the inability to access all relevant and encrypted traffic – which may originate from non-enterprise-owned assets or applications and/or services that are resistant to passive monitoring.
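The enhanced-identity-governance approach boils down to a deny-by-default policy check keyed on verified identity. A minimal sketch, with hypothetical resource and role names (the schema below is illustrative, not NIST's):

```python
# Illustrative sketch of identity-governed access (resource/role names are
# hypothetical). A request is allowed only if the identity has been verified
# AND the identity's role grants access to that specific resource.
POLICY = {
    "finance-db": {"finance-analyst", "auditor"},
    "hr-portal": {"hr-staff"},
}

def is_allowed(identity_verified: bool, role: str, resource: str) -> bool:
    """Deny by default: unverified identities and unknown resources get no access."""
    if not identity_verified:
        return False
    return role in POLICY.get(resource, set())

print(is_allowed(True, "auditor", "finance-db"))   # True
print(is_allowed(False, "auditor", "finance-db"))  # False
```

The key zero-trust property is that there is no "inside the network" shortcut: every request passes through the same verification-plus-policy gate.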


How business and IT can overcome the data governance challenge

Business heads and their teams, after all, are the ones who have the knowledge about the data – what it is, what it means, who and what processes use it and why, as well as what rules and policies should apply to it. Without their perspective and participation in data governance, the enterprise’s ability to intelligently lock down risks and enable growth will be seriously compromised. However, with their engagement, sustainable payback will be achieved and the case for continuing commitment by the enterprise to data governance will be easier to justify. It is vital, however, that modern data governance is a strategic initiative. A data governance strategy is the foundation upon which to build a muscular data-driven organization. Appropriately implemented – with business data stakeholders driving alignment between data governance and strategic enterprise goals and IT handling the technical mechanics of data management – the door opens to trusting data and using it effectively. Data definitions can be reconciled and understood across business divisions, knowledge base quality can be guaranteed, and security and compliance do not have to be sacrificed even as information accessibility expands.


Data Advantage Matrix: A New Way to Think About Data Strategy

For SaaS companies, the funnel is everything. Optimizing metrics at every stage of the funnel is what accelerates SaaS companies from average to exponential growth. So for any SaaS founder, if you don’t have basic operational analytics set up on day one, you’re probably doing something wrong. This fictional SaaS startup would start at the top left of the matrix with basic operational analytics. These analytics don’t have to be complicated. At Stage 1, it’s all about getting the basics right — measuring the number of leads per day, users converting on the site, users signing up on the product, free trials that end up paying, etc. Given the importance of operational analytics, it would make sense for this startup to move to Stage 2 pretty quickly — converting its basic analytics into something more scalable like a centralized intelligence engine. This would include investing in a data warehouse that brings all data into one place, adding a BI tool, and hiring the first analysts to drive data-driven decisions where it matters most.
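The Stage 1 basics amount to computing stage-over-stage conversion rates. A minimal sketch with made-up numbers for the fictional startup:

```python
# Hypothetical Stage 1 funnel counts for the fictional SaaS startup.
funnel = [
    ("leads", 1000),
    ("site_conversions", 400),
    ("signups", 250),
    ("paying_customers", 50),
]

# Conversion rate of each stage relative to the previous one.
for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
    print(f"{prev_name} -> {name}: {n / prev_n:.1%}")
# leads -> site_conversions: 40.0%
# site_conversions -> signups: 62.5%
# signups -> paying_customers: 20.0%
```

Once numbers like these live in a warehouse rather than a script, the same calculation becomes a BI-tool dashboard — which is exactly the Stage 2 move described above.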


Fintech has a gender diversity problem — here’s how we tackle it

Diversity can both empower individuals and spark feelings of inclusion across society. It encourages different perspectives and promotes tolerance and understanding in workplaces. And in business, quite rightly, the topic has entered the mainstream. Take the engineering industry as an example, where gender equality figures are showing an encouraging steady upwards trajectory. In law, female representation across the world is also respectable. Both show gender equality is slowly, but surely, moving in the right direction, but sadly in financial technology (Fintech), the same is yet to be realised. A report by Innovate Finance found women still account for less than 30% of the Fintech workforce, with less than 20% in executive positions. By 2026, the industry is estimated to grow by 20%, hitting the $324 billion mark in value, meaning that the gender gap will soon widen even further. But how can Fintech continue to progress and thrive if it isn’t a desirable industry for all? Clearly, the industry needs to do more. The question is, how?


The IT Talent Crisis: 2 Ways to Hire and Retain

While CompTIA’s research notes that money is the top reason for workers to leave, another factor is the lack of opportunity. “Our research indicates that a top reason tech workers consider leaving is a lack of career growth opportunities, a telling message to employers not to underestimate the value of investing in staff training and professional development,” said Tim Herbert, executive VP for research and market intelligence at CompTIA, in a press release. Investing in employee training during a labor crunch can also have downsides if employees take advantage of training and then use those added skills to parlay their way into a new opportunity elsewhere. But if employees are leaving, they are going somewhere, too. Pyle recommends that organizations not only look carefully at their compensation package offers but also consider casting a wider net for candidates by looking outside of their usual geography. “The hybrid work environment works,” he says. “People can work from anywhere. If we are bringing in the right talent we can bring them in from anywhere, as long as they can do the job.”


CIO role: How to move from gatekeeper to advisor

Historically, the role of the CIO focused on identifying, implementing, and maintaining business IT systems, with budget set aside to explore and drive innovation within the organization. Moving into the 2000s and 2010s, CIOs were tasked with spearheading digital transformation and the journey to the cloud. Today’s CIO must work as an advisor and partner to departments across the organization, seeking to understand the needs of the wider business and ensuring that those needs are met in a way that works both for the individual and the wider business aims. However, this distributed approach brings challenges: How, for example, does IT respond to a vulnerability in a piece of software that IT did not know was running? Many IT leaders tell us that the barrier between shadow IT and business-led IT is becoming more and more blurred and is forcing tradeoffs around risk vs. flexibility. It is IT’s role to not only facilitate business needs but also ensure compliance and security. This creates tension between offering advice vs. imposing governance.


Hackers Disrupt Canadian Healthcare and Steal Medical Data

The attack has resulted in ongoing disruptions to care in addition to exposed data. The province comprises four regional health authorities, although data was not stolen from all of them: Western Health - no data believed to have been stolen, Central Health - data exposure unclear, Eastern Health - 14 years of data exposed, and Labrador-Grenfell - 9 years of data exposed. Officials say they're attempting to restore systems from backups, and that the process remains underway and is not yet complete. On Thursday, for example, public broadcaster CBC reported that while the Health Sciences Center hospital in the city of St. John's had restored its Meditech system, which handles patient health information and financial details, it only included information from before the attack. Each health authority has been publishing its own updates on the ongoing disruptions it continues to face. Through at least Wednesday, for example, Western Health noted that only some appointments would be proceeding, including chemotherapy appointments "at a reduced capacity."


The Renaissance of Code Documentation: Introducing Code Walkthrough

As inline comments describe the specific code area they are attached to without a broader scope, they are always limited. As for high-level documentation, it can indeed provide the big picture, but it lacks the details that developers need for their work. For example, in the documentation about extending git’s source code, you can definitely describe something like the general process of creating a new git command in a high-level document. However, you won’t be able to do so effectively without getting into specific details and giving examples from the code itself. ... Code-Walkthrough Documentation takes the reader on a “walk” made up of at least two stations within the code. They describe flows and interactions and they may rely on incorporating code snippets or tokens to do so. In other words, they are code-coupled, in accordance with the principles of Continuous Documentation. This kind of document provides an experience similar to getting familiarized with a codebase with the help of an experienced contributor - one who walks you through the code.



Quote for the day:

"You can't be a leader if you can't influence others to act." -- Dale E. Zand

Daily Tech Digest - November 09, 2021

Bias still dominates the discussion of AI adoption in business

Organisations are at last beginning to take ethical standpoints on machine learning and its role in automated decision-making. According to HBR, companies (including Google, Microsoft, BMW and Deutsche Telekom) are creating internal AI policies, making commitments to fairness, safety, privacy and diversity. Organisations must recognise machine learning as a predictive technology that requires the application of judgement—a key part of any such policy—ensuring interpretability and, consequently, trust. While it might be hard to remove bias from your data entirely, you can effectively minimise the effects of that bias by applying a layer of systemised judgement. This turns predictions into decisions that can be trusted. To achieve this you need technology that can efficiently and transparently automate that governance process. New platforms enable firms to apply machine-learnt predictions safely by incorporating a layer of automated human judgement into their systems.


Failing Fast: The Impact of Bias When Speeding Up Application Security

You have a tools bias if you're spending thousands of dollars on tools and systems to integrate them into your development lifecycle. Not every tool needs to cost you a lot of money. There's a great deal of amazing, free open source tools out there. Not everyone needs to be spending that much money. Do you have tools purchased but not properly implemented into your build pipeline? Maybe they were put in and then they were removed because they were causing you pain. Or maybe you got them and you put them in a learning mode, but you never got them fully installed. That's a tool bias. You've spent the time and focus believing the tool will solve the problem, but you've not actually solved the problem. You got halfway there and stopped. If there's no plan for maintaining, tuning, or configuring tools post-purchase, also known as a sales-person driven development style, then you've got a tools bias. Your tool has not made you more secure. Your tool has given you the feeling of security, but without the actual action.


Why regulation of tech platforms is the new game changer for strategy

Regulation is proving pivotal in conflicts created when traditional firms compete with or participate in ecosystems dominated by big tech. How many of the profit opportunities created by new regulation will be gobbled up by big tech, and how much of that profit can be internalized by their partners? For instance, regulators are asking, Is it appropriate for a dominant ecosystem orchestrator like Apple to forbid content providers from accessing customers and demanding payments directly? And, given the modest effort Apple put into setting up its App Store, is its 30% cut from every app sold there a fair practice or a blatant abuse of dominant position? Epic Games’ recent lawsuit against Apple (which centered around how people pay for the Fortnite game) sailed bravely into these uncharted waters; the judge ultimately ordered Apple to reverse some, if not all, of its practices. Consider also the drama currently playing out in digital advertising. Big tech firms, supported by their ecosystem partners, have helped spawn a successful industry focused on understanding the profile of individual customers and offering them tailored advertising.


Why are we still asking KBA questions to authenticate identity?

The federal government has long acknowledged the risks presented by KBAs, and NIST’s own guidelines expressly disavow KBA for digital applications: “The ease with which an attacker can discover the answers to many KBA questions, and the relatively small number of possible choices for many of them, cause KBA to have an unacceptably high risk of successful use by an attacker.” Meanwhile, a study by Google found that only 47% of people could remember what they put down as their favorite food a year earlier – and that hackers were able to guess the food nearly 20 percent of the time, with Americans’ most common answer (of course) being pizza. And even when a user does remember the correct answer to one of these questions, they sometimes forget the precise form of their answer, all of which leads to a frustrating customer experience. Protracted verification times inevitably lead to customer abandonment of transactions such as opening a new account, resulting in delayed or lost business. Unsurprisingly, the longer it takes to verify a customer’s identity, the more likely it is they will abandon the process entirely.


Thousands Now Find Success with OKRs, Why Aren't You?

Objectives and Key Results (OKRs) is a flexible tool that helps people and organizations achieve their goals by setting specific and measurable actions. It also helps them communicate and monitor progress towards those goals. Objectives should be short and inspirational; an objective defines the goal you want to achieve. Companies typically create three to five high-level objectives per quarter — for example, increasing brand awareness — and these objectives are meant to be ambitious. Choosing the right objective for your goal can be a challenging aspect of this practice, but when it's done correctly, you can tell if you have reached your objective. Key Results help you deliver each objective, so you can measure your progress towards your goals. ... OKRs are a flexible framework, and because of this, you can set and phrase OKRs in different ways. Think of it as the pillar of your strategy for the next period. To come up with good OKRs, I would advise that you connect them to your day-to-day activities.
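Because key results are numeric by design, progress toward an objective is just arithmetic over its key results. A minimal sketch, with an invented objective and targets:

```python
# Sketch of OKR progress tracking; the objective and key results below are
# invented for illustration.
from dataclasses import dataclass

@dataclass
class KeyResult:
    description: str
    target: float
    current: float

    def progress(self) -> float:
        # Cap at 100% so overshooting one KR doesn't mask lagging ones.
        return min(self.current / self.target, 1.0)

objective = "Increase brand awareness"
key_results = [
    KeyResult("Grow newsletter subscribers", target=5000, current=3500),
    KeyResult("Publish guest posts", target=10, current=4),
]

# Objective progress as the average of its key results' progress.
overall = sum(kr.progress() for kr in key_results) / len(key_results)
print(f"{objective}: {overall:.0%} complete")  # 55% complete
```

Averaging is one simple scoring choice; teams also weight key results or grade them on the 0.0–1.0 scale popularized by Google's OKR practice.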


How can we eliminate gender bias in tech?

There’s clear evidence of professional prejudice against working mothers — women are passed up for job progression and prevented from exploring other opportunities. This is called the ‘motherhood penalty’. On average, women lose 4% of hourly earnings when they start a family — a significant amount when taken as a proportion of lifetime earnings — compared to men, who gain an average pay rise of 6% after becoming fathers. Moving forward, employers must make clear to female staff that they will be judged purely on performance, not on their working schedules – opening the door to more flexible working options, letting women advance professionally without jeopardising family commitments. Likewise, the stigma around shared parental leave must be addressed, normalising a man’s role as equal caregiver when tending to a new-born. With more equitable paternity policies, female staff will be better enabled to pursue senior leadership roles.


The Crypto Industry Isn’t Too Thrilled About Biden’s Big Policy Moves

Despite some heavy lobbying by crypto lobbyists back in August to clarify the definition of “broker” as it applies to digital assets, the proposed bill passed the Senate without any amendments. The bill was introduced and voted through the Senate within a week in August. While the bill was awaiting House approval, I spoke to some crypto tax lawyers in the U.S. about how things might play out if it is signed into law without amendments. Nathan Giesselman, a partner at Skadden, Arps, Slate, Meagher & Flom LLP, told me that, as it is written in the bill, the provision runs the risk of capturing folks like miners and developers who don’t have the same customer information that a traditional broker might have, putting them in the awkward position of not being able to comply with the required reporting. Now that the House has passed the bill, it’s clear that much will depend on how the U.S. Treasury Department interprets the definition of broker.


Six AI and Big Data Trends in Banking for 2022

Big data and AI require intense computing horsepower, so banks and credit unions are increasingly turning to the cloud to host data and applications. Not only is the cloud able to scale to handle high computing demands, but it does so cost-effectively. IDC states that global spending on cloud services — including hardware and software — will surpass $1.3 trillion by 2025, growing at a CAGR of 16.9%. Both shared (public) cloud and dedicated (private) cloud are slated to grow, says IDC, with private cloud growing at a faster rate. Since bank legacy systems weren’t designed for distributed computing environments, moving them to the cloud is challenging. However, banks and credit unions are warming to the idea of not just moving legacy systems to the cloud but transforming them into cloud-native platforms, although few have made the leap to a fully cloud-based environment. JPMorgan Chase and Arvest Bank have both announced that they will switch portions of their core systems to a cloud-native platform.
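Growth figures like IDC's can be sanity-checked with the standard CAGR formula. A quick sketch — the $0.7T starting value below is a hypothetical baseline, not IDC's number:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical example: $0.7 trillion growing to $1.3 trillion over four years.
print(f"{cagr(0.7, 1.3, 4):.1%}")  # 16.7% -- in the ballpark of the quoted 16.9%
```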


The cyber insurance dilemma: The risks of a safety net

Every company owner should be aware of what they are looking for when it comes to cyber insurance. They should always read the fine print and understand the specifics of coverage, deductibles, and exclusions. This safety net can be highly effective if the policy is correctly written and the business is fully aware of its coverage. According to Dan Burke, the Vice President at Woodruff Sawyer (a national insurance provider), cyber insurance typically doesn’t cover three types of losses: potential future lost profits, loss of value due to the theft of intellectual property, and betterment (i.e., the cost to improve internal technology systems after the attack, such as IT upgrades after a cyber event). In other words, many losses beyond the initial ransom are not likely to be covered by insurance. Today, most ransomware attacks do not stop at the initial breach. Take the SolarWinds incident as an example: instead of locking SolarWinds’ IT systems, attackers planted malicious code into the company’s Orion technology platform, which is used by more than thirty thousand customers, including the U.S.


Why organisations need to take charge of Office 365 backup and recovery

No matter the size of your organisation, if you’ve automated the backup process for your Office 365 environments, then you’ve taken a big first step to protect your data and ensure its quick recovery. Keep in mind that access to regularly backed up files significantly improves the chances of recovering from a system outage or malware attack. Find a solution that will let you effortlessly pinpoint SaaS data and records. Organisations need to be able to perform targeted restores, preserve critical data sets, and manage production and sandbox environments with ease. Some of this will come down to granular search and restore, but it’s also a good idea to implement point-in-time and version-level recovery tools and immediate restores. Staying secure means that it’s easier to stay compliant. Look for a solution that offers stringent standards, privacy protocols, and zero-trust access controls — this could also include isolated, air-gapped backups from source data, built-in GDPR compliance, and encrypted data when at rest or in-flight. Multi-layering your security also means you can add role-based, SSO, SAML authentication controls too.



Quote for the day:

"Curiosity is the thing that sparks a step into an adventure." -- Annie Lennox

Daily Tech Digest - November 08, 2021

A New Quantum Computing Method Is 2,500 Percent More Efficient

Today, most quantum computers can only handle the simplest and shortest algorithms, since they're so wildly error-prone. And in recent algorithmic benchmarking experiments executed by the U.S. Quantum Economic Development Consortium, the errors observed in hardware systems during tests were so serious that the computers gave outputs statistically indiscernible from random chance. That's not something you want from your computer. But by employing specialized software to alter the building blocks of quantum algorithms, which are called "quantum logic gates," the company Q-CTRL discovered a way to reduce the computational errors by an unprecedented level, according to the release. The new results were obtained via several IBM quantum computers, and they also showed that the new quantum logic gates were more than 400 times more efficient in stopping computational errors than any methods seen before. It's difficult to overstate how much this simplifies the procedure for users to experience vastly improved performance on quantum devices.


Design Patterns for Machine Learning Pipelines

Design patterns for ML pipelines have evolved several times in the past decade. These changes are usually driven by imbalances between memory and CPU performance. They are also distinct from traditional data processing pipelines (something like map reduce) as they need to support the execution of long-running, stateful tasks associated with deep learning. As growth in dataset sizes outpaces memory availability, we have seen more ETL pipelines designed with distributed training and distributed storage as first-class principles. Not only can these pipelines train models in a parallel fashion using multiple accelerators, but they can also replace traditional distributed file systems with cloud object stores. Along with our partners from the AI Infrastructure Alliance, we at Activeloop are actively building tools to help researchers train arbitrarily large models over arbitrarily large datasets, like the open-source dataset format for AI, for instance. ... Even though the problem of transfer speed remained, this design pattern is widely considered the most feasible technique for working with petascale datasets.
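The core of the pattern — keeping memory bounded by streaming shards from remote storage rather than loading the dataset up front — can be sketched with a generator. `fetch_shard` and the shard names below are stand-ins for a real object-store client, not any specific library's API:

```python
# Minimal sketch of streaming training data from object storage instead of
# loading it all into memory. fetch_shard is a stand-in for a real cloud
# object-store download.
from typing import Iterator, List

def fetch_shard(shard_name: str) -> List[dict]:
    """Stand-in: pretend each shard downloads as three examples."""
    return [{"shard": shard_name, "index": i} for i in range(3)]

def stream_examples(shard_names: List[str]) -> Iterator[dict]:
    """Yield examples one shard at a time, so only one shard is resident."""
    for name in shard_names:
        for example in fetch_shard(name):
            yield example

batch = list(stream_examples(["shard-000", "shard-001"]))
print(len(batch))  # 6
```

In a real pipeline the training loop consumes the iterator directly (and shards are prefetched in the background), so dataset size is decoupled from host memory.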


Daily Standup Meetings are useless

Standups are not about technical details, even though some technical context can help frame any complexity that has arisen and allow PjMs and Leads to take the necessary steps to enable you to achieve your tasks (additional meetings, extending the deadline, re-estimating the task within the sprint, setting up pair programming sessions, etc.). According to the official docs, the Daily Scrum is a 15-minute event for the Developers of the Scrum Team. The purpose of the Daily Scrum is to inspect progress toward the Sprint Goal and produce an actionable plan for the next day of work. This creates focus and improves self-management. Daily Scrums improve communications, identify impediments, promote quick decision-making, and consequently eliminate the need for other meetings. Honestly, I am not so sure about the last point; due to the short time allocated to it, it can indeed generate other meetings: follow-ups between the Tech Lead and (some of) the developers, between the team and the stakeholders, or among developers who decide to tackle an issue with pair programming.


When Is the Waterfall Methodology Better Than Agile?

Is waterfall ever better? The short answer is yes. Waterfall is more efficient, more streamlined, and faster when it comes to specific types of projects like these: Generally speaking, the smaller the project, the better suited it is to waterfall development. If you’re only working with a few hundred lines of code or if the scope of the project is limited, there’s no reason to take the continuous phased approach; Low priority projects – those with minimal impact – don’t need much outside attention or group coordination. They can easily be planned and knocked out with a waterfall methodology; One of the best advantages of agile development is that your clients get to be an active part of the development process. But if you don’t have any clients, that advantage disappears. If you’re working internally, there are fewer voices and opinions to worry about – which means waterfall might be a better fit; Similarly, if the project has few stakeholders, waterfall can work better than agile. If you’re working with a council of managers or an entire team of decision makers, agile is almost a prerequisite. 


To Secure DevOps, Security Teams Must be Agile

Focusing on a pipeline using infrastructure-as-code allows security teams to build in static analysis tools to catch vulnerabilities early, dynamic analysis tools to catch issues in staging and production, and policy enforcement tools to continuously validate that the infrastructure is compliant, Leitersdorf said. "If you think about how security can be done now, instead of doing security at the tail end of the process ... you can now do security from the beginning through every step in the process all the way to the end. Most security issues will be caught very early on, and then a handful of them will be caught in the live environment and then remediated very quickly," he said. Developers get to retain their speed of development and deployment of applications and, at the same time, reduce the time to remediate security issues. And security teams get to collaborate more closely with DevOps teams, he said. "From a security team perspective, you feel better, you feel more confident, you have guardrails around your developers to reduce the chance of making mistakes along the way and building insecure infrastructure and you now have visibility into their DevOps process, a huge bonus," Leitersdorf said.
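The policy-enforcement idea — statically checking infrastructure-as-code definitions before they deploy — can be sketched as a simple scan over declared resources. The resource schema below is invented for illustration and does not correspond to any particular IaC tool's format:

```python
# Toy policy check in the spirit of IaC static analysis. The resource schema
# is hypothetical, not a real tool's format.
def find_violations(resources: list) -> list:
    """Flag storage buckets that allow public access."""
    return [
        r["name"]
        for r in resources
        if r.get("type") == "storage_bucket" and r.get("public_access", False)
    ]

resources = [
    {"name": "logs-bucket", "type": "storage_bucket", "public_access": True},
    {"name": "app-db", "type": "database", "public_access": False},
]
print(find_violations(resources))  # ['logs-bucket']
```

Run as a CI step that fails the build on any violation, a check like this catches the misconfiguration before it ever reaches staging or production — the "security from the beginning" that Leitersdorf describes.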


How to Avoid Vulnerabilities in Your Code

In general, the "minor problems" are precisely those that result in the most extensive security disasters, and we can say that failures usually present two gaps: We encountered flaws such as insecure code design, injection, and configuration issues through a code vulnerability intentionally or unintentionally: within the TOP 10 demonstrated by OWASP; Through the vulnerability of operations: among the most common problems, we can mention the choice for "weak passwords", default or even the lack of password. A second common failure is the mismanagement of people's permission to a document or system. These types of problems, unfortunately, are pretty standard. Not by chance, 75% of Redis servers have issues of this type. In an analogy, we can say that security flaws are like the case of the Titanic. Considered one of the biggest wrecks, most people are unaware that the ship had a "small problem": the lack of a simple key that could have opened the compartment with binoculars and other devices to help the crew visualize the iceberg in time and prevent a collision.


Taking Threat Detection and Response to the Next Level with Open XDR

XDR fundamentally brings all the anchor tenets that are required to detect and respond to threats into a simple, seamless user experience for analysts that automates repetitive work. Bringing together all the required context enables analysts to take action quickly, without getting lost in a myriad of use cases, different screens and workflows and search languages. It can also help security analysts respond quickly without creating endless playbooks to cover every possible scenario. XDR unifies insights from endpoint detection and response (EDR), network data and security analytics logs and events as well as other solutions, such as cloud workload and data protection solutions, to provide a complete picture of potential threats. XDR incorporates automation for root cause analysis and recommended response, which is critical in order to respond quickly with confidence across a complex IT and security infrastructure. Whether your primary challenge is the complexity of tools, data and workflows or preventing a ransomware actor from laterally moving across your environment, quickly detecting and containing threats is of the essence.


Why Your Code Needs Abstraction Layers

By creating your abstraction in one layer, everything related to it is centralized so any changes can be made in one place. Centralization is related to the “Don’t repeat yourself” (DRY) principle, which can be easily misunderstood. DRY is not only about the duplication of code, but also of knowledge. Sometimes it’s fine for two different entities to have the same code duplicated because this achieves isolation and allows for the future evolution of those entities separately. ... By creating the abstraction layer, you expose a specific piece of functionality and hide implementation details. Now code can interact directly with your interface and avoid dealing with irrelevant implementation details. This improves the code readability and reduces the cognitive load on the developers reading the code. Why? Because a policy is less complex than its details, so interacting with it is more straightforward. ... Abstraction layers are great for testing, as you get the ability to replace details with another set of details, which helps isolate the areas that are being tested and properly create test doubles.
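The interface-plus-test-double idea can be sketched in a few lines; the class and function names below are illustrative:

```python
# Sketch of an abstraction layer: callers depend only on the interface, so the
# storage detail can be swapped for a test double. Names are illustrative.
from abc import ABC, abstractmethod

class UserStore(ABC):
    """The abstraction layer: callers see this policy, not the storage detail."""
    @abstractmethod
    def get_email(self, user_id: int) -> str: ...

class InMemoryUserStore(UserStore):
    """Test double that stands in for a real database behind the same interface."""
    def __init__(self, data: dict):
        self._data = data

    def get_email(self, user_id: int) -> str:
        return self._data[user_id]

def notify(store: UserStore, user_id: int) -> str:
    # Business logic interacts only with the interface.
    return f"Sent notification to {store.get_email(user_id)}"

store = InMemoryUserStore({1: "dev@example.com"})
print(notify(store, 1))  # Sent notification to dev@example.com
```

A production `UserStore` backed by a real database could later replace `InMemoryUserStore` without touching `notify` — the centralization and testability benefits described above.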


How to get more women in tech and retain them

When it comes to the demand side, although much progress has been made in terms of escalating diversity up the corporate agenda, there still needs to be a seismic shift in employment practices. Specifically, there needs to be a sweeping change in mindset amongst hiring managers, who still often hire based on ‘cultural fit’, as well as a new approach to how companies help women to break the glass ceiling, through mentoring, career development support and family friendly policies. In the case of recruitment, hiring for cultural fit tends to favour the status quo in a company, whether that relates to race, gender, age, socioeconomic level and so on. That makes it harder for anyone who doesn’t ‘fit the mould’ to get into sectors where they are currently under-represented. Instead, hiring managers ought to look towards integrating ‘value fit’ in their hiring process. The values of a business are the things that drive the way they work and can include everything from teamwork and collaboration, to problem-solving and customer focus. Yes, you need to ensure your employees don’t clash with one another, but a value-fit approach will often lead to this outcome too.


How to Reduce Burnout in IT Security Teams

When our work and personal life interact, or the line between what is personal and what is work blurs, that is when we have to look at our setup. Working from home allows employees to have more flexibility and focus on their work. It also cuts down on commute hours. However, some of us are still trying to separate our work life from our personal life throughout the pandemic. Think about it. We take calls in our kitchen or bedroom at times. The kitchen and bedroom are personal spaces, not work spaces. Having a separate space for work helps a lot with balance. But not everyone has this privilege; as a consequence, the blurriness of work and personal life can impact us, and burnout can creep in. However, if you section off part of a room as a work space, and only use that particular spot for work, it does help. Lastly, no matter what, set work boundaries, such as turning off work equipment at 6PM during the week and for the whole weekend, and keep those boundaries in place.



Quote for the day:

"The actions of a responsible executive are contagious." - Joe D. Batton

Daily Tech Digest - November 02, 2021

Complexity is killing software developers

“There is more to this profession than writing code; that is the means to an end,” Hightower said. “Maybe we are saying we have built enough and can pause on building new things, to mature what we have and go back to our respective roles of consuming technology. Maybe this is the happy ending of the devops and collaboration movement we have seen over the past decade.” The market is responding to this complexity with an ever-growing list of opinionated services, managed options, frameworks, libraries, and platforms to help developers contend with the complexity of their environment. “No vendor is or will be in a position to provide every necessary piece, of course. Even AWS, with the most diverse application portfolio and historically unprecedented release cadence, can’t meet every developer need and can’t own every relevant developer community,” O’Grady wrote in a 2020 blog post. That being said, “there is ample evidence to suggest that we’re drifting away from sending buyers and developers alike out into a maze of aisles, burdening them with the task of picking primitives and assembling from scratch.”


Securing SaaS Apps — CASB vs. SSPM

There is often confusion between Cloud Access Security Brokers (CASB) and SaaS Security Posture Management (SSPM) solutions, as both are designed to address security issues within SaaS applications. CASBs protect sensitive data by implementing multiple security policy enforcements to safeguard critical data. For identifying and classifying sensitive information, like Personally Identifiable Information (PII), Intellectual Property (IP), and business records, CASBs definitely help. However, as the number of SaaS apps increases, the number of misconfigurations and the possible exposure grow, and cannot be mitigated by CASBs. These solutions act as a link between users and cloud service providers and can identify issues across various cloud environments. Where CASBs fall short is that they identify breaches after they happen. When it comes to getting full visibility and control over the organization's SaaS apps, an SSPM solution would be the better choice, as the security team can easily onboard apps and get value in minutes — from the immediate configuration assessment to its ongoing and continuous monitoring.


11 cybersecurity buzzwords you should stop using right now

The terms whitelist and blacklist date back to some of the earliest days of cybersecurity. Associating “white” with good, safe, or permitted, and “black” with bad, dangerous, or forbidden, the phrases are still commonly applied to allow or deny use or access relating to various elements including passwords, applications, and controls. Cybersecurity consultant Harman Singh thinks the terms urgently need replacing because of harmful racial overtones associated with them, suggesting allow lists and deny lists serve the same purpose without potentially damaging connotations linked to ethnicity and race. “This is such a small, yet significant, change,” he tells CSO. “The NCSC made this conscious change last year to avoid racial tone. Still only a handful of companies in the industry have thought about doing this. Why don’t we all follow this example to stamp out such terms?” In a blog post, Emma W, head of advice and guidance at the NCSC, wrote: “You may not see why this matters. If you’re not adversely affected by racial stereotyping yourself, then please count yourself lucky. For some of your colleagues, this really is a change worth making.”


How to Get Started with Competitive Programming?

First and foremost, pick out your preferred programming language and become proficient with its syntax, fundamentals, and implementation. You need to make yourself familiar with built-in functions, conditional statements, loops, etc., along with the required advanced concepts such as the STL library in C++ or Big Integers in Java. There are various languages suitable for Competitive Programming, such as C, C++, Java, Python, and many more. ... What you need to know – some individuals will suggest that it is not necessary to learn DSA beforehand to get started with CP and that it can be picked up along the way. However, we recommend that you at least cover the DSA fundamentals like Arrays, Linked Lists, Stacks, Queues, Trees, Searching, Sorting, and Time and Space Complexity before starting to solve competitive problems, as it will help you feel confident and solve a majority of the problems. Without knowing Data Structures & Algorithms well, you won’t be able to come up with an optimized, efficient, and ideal solution for a given programming problem.
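As an illustration of the kind of fundamental worth mastering first, here is binary search — a staple O(log n) competitive-programming building block — sketched in Python:

```python
# Binary search on a sorted array: O(log n) per query,
# versus O(n) for a naive linear scan.
def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid           # index of the target
        elif arr[mid] < target:
            lo = mid + 1         # discard the left half
        else:
            hi = mid - 1         # discard the right half
    return -1                    # not found
```

Reasoning about why this is O(log n) — the search space halves each iteration — is exactly the time-complexity analysis the excerpt says you should be comfortable with before contests.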


‘Trojan Source’ Bug Threatens the Security of All Code

“It is already hard for humans to tell ‘this is OK’ from ‘this is evil’ in source code,” Weaver said. “With this attack, you can use the shift in directionality to change how things render with comments and strings so that, for example, ‘This is okay’ is how it renders, but ‘This is’ okay is how it exists in the code. This fortunately has a very easy signature to scan for, so compilers can [detect] it if they encounter it in the future.” The latter half of the Cambridge paper is a fascinating case study on the complexities of orchestrating vulnerability disclosure with so many affected programming languages and software firms. ... “We met a variety of responses ranging from patching commitments and bug bounties to quick dismissal and references to legal policies,” the researchers wrote. “Of the nineteen software suppliers with whom we engaged, seven used an outsourced platform for receiving vulnerability disclosures, six had dedicated web portals for vulnerability disclosures, four accepted disclosures via PGP-encrypted email, and two accepted disclosures only via non-PGP email. They all confirmed receipt of our disclosure, and ultimately nine of them committed to releasing a patch.”
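The "easy signature to scan for" is concrete: the Unicode bidirectional control characters abused by Trojan Source form a small, fixed set. A minimal detector (an illustrative sketch, not the researchers' actual tooling) might be:

```python
# Unicode bidirectional control characters exploited by Trojan Source.
BIDI_CONTROLS = {
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # LRE, RLE, PDF, LRO, RLO
    "\u2066", "\u2067", "\u2068", "\u2069",            # LRI, RLI, FSI, PDI
}

def find_bidi_controls(source: str):
    """Return (line, column, codepoint) for every bidi control found."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in BIDI_CONTROLS:
                hits.append((lineno, col, f"U+{ord(ch):04X}"))
    return hits
```

A scan like this can run in a pre-commit hook or CI pipeline, flagging any source file that smuggles directionality overrides into comments or string literals.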


Cloud, microservices, and data mess? Graph, ontology, and application fabric to the rescue.

Data integration may not sound as deliciously intriguing as AI or machine learning tidbits sprinkled on vanilla apps. Still, it is the bread and butter of many, the enabler of all cool things using data, and a premium use case for concepts underpinning AI, we argued back then. The key concepts we advocated for then have been widely recognized and adopted today in their knowledge graph and data fabric guise: federation and semantics. Back then, the concepts were not as widely adopted, and parts of the technology were less mature and recognized. Today, knowledge graphs and data fabrics are top of mind; just check the latest Gartner reports. The reason we're revisiting that old story is not to bask in some "told you so" self-righteousness, but to add to it. Knowledge graphs and data fabrics can, and hopefully will, eventually, address data integration issues. ... The final part of the process is orchestrating services, i.e. executing, coordinating, and deploying them, in the right order and with the right parameters, wherever they may be - on-premises, in the cloud, or in containers. That creates what Duggal called an "application fabric", as an extension of the notion of a data fabric.


Chaos Engineering Made Simple

The shift toward cloud native technologies has enabled the development of more manageable, scalable and dependable applications, but at the same time it has brought about unprecedented dynamism to critical services. This is due to the multitude of coexisting cloud native components that have to be managed individually. Failure of even a single microservice can lead to a cascading failure of other services, which can cause the entire application deployment to collapse. ... LitmusChaos was created with the primary goal of performing chaos engineering in a cloud native manner, scaling it as per the cloud native norms, managing the life cycle of chaos workflows and defining observability from a cloud native perspective. Chaos experiments help achieve this goal by injecting chaos into the target resources, using simple, declarative manifests. These Kubernetes custom resource (CR) manifests allow for an experiment to be flexibly fine-tuned to produce the desired chaos effect, as well as contain the experiment blast radius so as to not harm other resources in the environment. 


Future of Blockchain: How Will It Revolutionize The World In 2022 & Beyond!

In an ever-evolving world, one of the most relevant use cases for blockchain right now is cryptocurrencies, and it is likely to remain that way for some time. However, an even more exciting future is emerging in blockchain technology: non-fungible tokens (NFTs). NFTs are a revolutionary new way of buying and selling digital assets that represent real-world items. All NFTs are unique and can’t be replaced or swapped — they can only be purchased, sold, traded, or given away by the original owner/creator of that asset. NFTs could power a whole new wave of digital collectibles, from rare artwork to one-of-a-kind sneakers and accessories. They could also be used in place of items in video games or other virtual worlds. ... Blockchain could replace this system with a digital identity that is safe, secure, and easy to manage. Instead of proving who you are by recalling some personal, arbitrary piece of information that could potentially be guessed or stolen, your digital identity is based on the uniquely random set of numbers assigned to each user on a blockchain network.


Quantum computers: Eight ways quantum computing is going to change the world

For decades, researchers have tried to teach classical computers how to associate meaning with words to try and make sense of entire sentences. This is a huge challenge given the nature of language, which functions as an interactive network: rather than being the 'sum' of the meaning of each individual word, a sentence often has to be interpreted as a whole. And that's before even trying to account for sarcasm, humour or connotation. As a result, even state-of-the-art natural language processing (NLP) classical algorithms can still struggle to understand the meaning of basic sentences. But researchers are investigating whether quantum computers might be better suited to representing language as a network -- and, therefore, to processing it in a more intuitive way. The field is known as quantum natural language processing (QNLP), and is a key focus of Cambridge Quantum Computing (CQC). The company has already experimentally shown that sentences can be parameterised on quantum circuits, where word meanings can be embedded according to the grammatical structure of the sentence. 


Anomaly Detection Using ML.NET

As the name suggests, it is about finding what is abnormal compared to what you expect in your day-to-day life. It helps identify data points, observations, or events that deviate from the normal behavior of the dataset. Many distributed systems now require performance monitoring, and a considerable amount of data and events pass through them. Anomaly detection makes it possible to determine where the source of a problem is, which significantly reduces the time to rectify the fault. It also allows us to detect outliers and report them accordingly. All these applications have one common focus that I mentioned earlier - outliers. These are cases where data points are distant from the others, do not follow a particular pattern, or match known anomalies. Each of these data points can be useful for identifying anomalies and responding correctly to them.
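ML.NET itself is a C# library, but the underlying idea — flagging points that stray far from the rest of the data — can be sketched in a few lines. A minimal z-score outlier test (illustrative only; not the algorithm ML.NET uses):

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Flag points that deviate from the mean by more than
    `threshold` standard deviations -- a simple outlier test."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)  # population standard deviation
    if stdev == 0:
        return []  # all values identical: nothing deviates
    return [v for v in values if abs(v - mean) / stdev > threshold]
```

Real anomaly detectors (including ML.NET's time-series ones) account for trend and seasonality, but the core question is the same: how far is this point from "normal"?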



Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye

Daily Tech Digest - November 01, 2021

Why Is It Good For IT Professionals to Learn Business Analytics?

Most IT professionals who want to broaden their horizons go for business analytics courses. It gives a boost to their career and opens up more job opportunities. This is because IT professionals are well versed in software development and can link business challenges to technical solutions. The main task of a business analyst is to gather and analyze data and sort it according to the requirements of the business. Compared to others who take up a job as a business analyst, IT professionals can solve problems, identify risks, and manage technology-related restrictions more efficiently. Therefore, learning business analytics can open more doors for IT professionals. Business analytics requires hard skills along with soft skills. Business analysts must know how to analyze data trends, convey the information to others, and help them apply it on the business side. A person with a strong IT background who understands how systems, products, and tools work can learn business analytics to boost their career and enjoy the hybrid role.


How to develop a data governance framework

Rather than begin with a set of strict guidelines that launch across the entire organization at once, they should start with a framework applied to a certain data set that will be used by a specific department for a specific analytics project and then build out from there, according to Matt Sullivant, principal product manager at Alation. "As you're establishing your framework and trying to figure out what you want from the data governance, you have milestones along the way," he said. "You start off small with a small set of data and a small set of policies and then eventually you mature out to more robust processes and tackle additional data domains." Sullivant added that by starting small, there's a better chance of success, and by showing success on a small scale there's a better chance of both organizational leaders and potential end users of data seeing the value of a data governance framework. "A lot of quick wins show the value of a data governance program, and then you can expand from there," he said.


When low-code becomes high maintenance

Low-code offers the promise of rapid app creation on a massive scale but can easily become de-prioritised due to lack of understanding, internal capacity, or a skills gap within the team. This can often mean that objectives are either not aligned or are being missed, stifling return on investment. Just like any IT project, it’s crucial to take a step back before you begin building applications. Review your current systems and processes first, document any strengths and weaknesses, and define what success looks like to your business. These measures will ensure you know which areas require extra attention and resources, while providing clarity around application outcomes. ... The simple ‘drag and drop’ mindset that is associated with low-code tools means there is a temptation to jump into the build without first scoping the business requirements. This largely depends on asking the right questions. However, many organisations struggle to apply this clear thinking when developing apps. A step-by-step approach can help ensure positive outcomes. Think about who will be involved; what you would like to achieve; how you will get there; the barriers; and, the measurement of success. 


Multinational Police Force Arrests 12 Suspected Hackers

The suspected hackers are alleged to have had various roles in organized criminal organizations. They are believed responsible for dealing with initial access to networks, using multiple mechanisms to compromise IT networks, including brute force attacks, SQL injections, stolen credentials and phishing emails with malicious attachments. "Once on the network, some of these cyber actors would focus on moving laterally, deploying malware such as Trickbot, or post-exploitation frameworks such as Cobalt Strike or PowerShell Empire, to stay undetected and gain further access," Europol states. In addition, it is claimed that the criminals would lie undetected in the compromised system for months, looking for further weaknesses in the network before monetising the infection by deploying ransomware, such as LockerGoga, MegaCortex and Dharma, among others. "The effects of the ransomware attacks were devastating as the criminals had had the time to explore the IT networks undetected. ..." Europol notes.


Doing the right deals

Deals don’t always produce value. PwC research has shown that 53% of all acquisitions underperformed their industry peers in terms of TSR. And as PwC’s 2019 report “Creating value beyond the deal” shows, the deals that deliver value don’t happen by accident. Success often includes a strong strategic fit, coupled with a clear plan for how that value will be realized. Armed with that knowledge, we set out to better understand the relationship between strategic fit and deal success. ... When we analyzed deal success versus stated strategic fit, we found that the stated strategic intent had little or no impact on value creation, with the logical exception of capability access deals. Whether a deal fits depends minimally on its aim. What matters is whether there is a capabilities fit between the buyer and the target. Indeed, there was little variance among the remaining four types of deals—product or category adjacency, geographic adjacency, consolidation, and diversification—which on average performed either neutrally or negatively from a value-generation perspective compared with the market.


The antidote to brand impersonation attacks is awareness

There is no silver bullet here and the best practices definitely apply. On a high level, I would say ensure the people in your organization are aware and are trained in security awareness. I mention this first because it’s all about people. These same people work with brands and systems that need to be protected. The most commonly used attack route is still email, and this expands to other communication channels and platforms. It seems obvious to start protecting these channels. Getting back to awareness, this is not just about people; it’s also about being aware of (unauthorized) usage of your organization's brand and having protection and remediation measures in place for when that brand gets abused in an impersonation attack. This might sound overwhelming, and in a way, it is. Similar to security, the work on brand impersonation protection is never entirely done. Can it be simplified? Well, yes. Make a risk assessment and start with the first steps that deliver the best ROI on protection. In my view, security is a journey, even when it’s in a close-to-perfect state at any given moment.


CIO role: Why the first 90 days are crucial

The purpose of the 90-day plan isn’t to have everything sorted out on Day 1, but rather to provide guidelines and milestones for you to achieve. For example, you might set a 30-day goal to meet with all the senior leaders in the organization. The specifics can be hammered out after you’ve had time to assess who the key leaders are and how to connect with them. Your second 30 days (30-60 days) might entail getting to know the mid-level leaders or spending more time with your second-in-command in the IT division. The plan will guide you; the details will evolve as your 90 days elapse. ... The first 90 days are when initial impressions and expectations are created. Set the agenda in a dynamic and intelligent manner so you are seen as an active, engaged, and competent leader from Day 1. If you come out of the gate slowly or ineffectively – or worse, if you stumble badly, you’ll struggle to overcome that reputation. If you come out too aggressively, on the other hand, your peers will be wary and you’ll struggle to build trust. Either extreme will negatively impact your success trajectory.


How to choose an edge gateway

For organizations with significant IoT deployments, edge computing has emerged as an effective way to process sensor data closest to where it is created. Edge computing reduces the latency associated with moving data from a remote location to a centralized data center or to the cloud for analysis, slashes WAN bandwidth costs, and addresses security, data-privacy, and data-autonomy issues. On a more strategic level, edge computing fits into a private-cloud/public-cloud/multi-cloud architecture designed to enable new digital business opportunities. One big challenge of edge computing is figuring out what to do with all the different kinds of data being generated there. Some of the data is simply not relevant or important (temperature readings on a motor that is not overheating). Other data can be handled at the edge; this type of intermediate processing would be specific to that node and would be of a more pressing nature. The cloud is where organizations would apply AI and machine learning to large data sets in order to spot trends ... The fulcrum that balances the weight of raw data generated by OT-based sensors, actuators, and controllers with the IT requirement that only essential data be transmitted to the cloud is the edge gateway.
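The "intermediate processing" described above often amounts to filtering at the gateway: readings inside the normal band are dropped or aggregated locally, and only exceptions travel over the WAN. A toy sketch (the normal range and record shape are invented for illustration):

```python
# Hypothetical edge-gateway filter: only out-of-band sensor readings
# are forwarded to the cloud, slashing WAN bandwidth.
NORMAL_RANGE = (20.0, 80.0)  # e.g. motor temperature in degrees C (illustrative)

def filter_for_cloud(readings):
    """Keep only readings outside the normal operating band."""
    low, high = NORMAL_RANGE
    return [r for r in readings if not (low <= r["temp_c"] <= high)]
```

A reading from a motor that is not overheating never leaves the site; an anomalous one is escalated to the cloud, where heavier AI/ML analysis can run over the aggregated data.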


How to better design and construct digital transformation strategies for future business success

What we are witnessing now is the need to re-consider how end-to-end design is changing the best way to merge the ready-made services in the cloud from a hyperscale or SaaS provider with the telecoms world, while providing connectivity in a secure manner. How does this manifest itself during a transformation within an organisation, and, importantly, how can a business align its strategy to its implementation? The answer lies in having concise messaging built upon a clear strategy. ... With all of this in mind, non-functional designs as well as the functional elements are still crucial and cannot simply be left to the cloud provider, which is still what many businesses believe. Resilience of a service and recovery actions in the event of a failure need deep thought and consideration. Ideally these should be automated via robotic process automation, but for this to be a success, instrumentation of a service and event correlation are needed to truly determine where in the service chain an error has occurred.


Is Monolith Dead?

Monolith systems have the edge when it comes to simplicity. If the development process can somehow avoid turning it into a big ball of mud, and if a monolith system (as defined above) can be broken into sub-systems such that each of these sub-systems is a complete unit in itself, and if these subsystems can be developed in a microservices style, we can get the best of both worlds. Such a sub-system is nothing but a “Coarse-Grained Service”, a self-contained unit of the system. A coarse-grained service can be a single point of failure. By definition, it consists of significant sub-parts of a system, so its failure is highly undesirable. If a part of this coarse-grained service fails (which otherwise would have been a fine-grained service itself), it should take the necessary steps to mask the failure, recover from it, and report it. However, the trouble begins when this coarse-grained service fails as a whole. Still, that is not a deal-breaker, and if the right mechanisms are in place for high availability (containerized, multi-zone, multi-region, stateless), the chances of it are very slim.
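The failure-masking behaviour described above can be sketched as a fallback inside the coarse-grained unit (the service and class names here are invented for illustration):

```python
# A coarse-grained service masking the failure of one of its internal
# parts, so the unit as a whole stays available.
class RecommendationPart:
    """An internal part that would have been a fine-grained service."""
    def personalized(self, user):
        raise RuntimeError("model store unreachable")  # simulated fault

class CatalogService:
    """The coarse-grained, self-contained unit."""
    def __init__(self):
        self._recs = RecommendationPart()
        self._bestsellers = ["book-1", "book-2"]  # static fallback data

    def recommendations(self, user):
        try:
            return self._recs.personalized(user)
        except RuntimeError:
            # Mask the internal failure: degrade to a fallback and,
            # in a real system, report the error for recovery.
            return self._bestsellers
```

Callers of `CatalogService` never see the internal fault; they simply get degraded-but-valid results, which is the masking behaviour the excerpt asks of a coarse-grained service.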



Quote for the day:

"No man can stand on top because he is put there." -- H. H. Vreeland

Daily Tech Digest - October 31, 2021

Hackers Breach iOS 15, Windows 10, Google Chrome During Massive Cyber Security Onslaught

"The first thing to note is the in-group, out-group divide here," says Sam Curry, the chief security officer at Cybereason. Curry told me that there's a sense that China has the "critical mass and doesn't need to collaborate to innovate in hacking," in what he called a kind of U.S. versus them situation. Curry sees the Tianfu Cup, with the months of preparation that lead up to the almost theatrical on-stage reveal, as a show of force. "This is the cyber equivalent of flying planes over Taiwan," he says, adding the positive being that the exploits will be disclosed to the vendors. There are, of course, lots of positives about a hacking competition, such as the Tianfu Cup or Pwn2Own. "The security researchers involved in these schemes can be an addition to existing security teams and provide additional eyes on an organisation's products," George Papamargaritis, the managed security service director at Obrela Security Industries, says, "meaning bugs will be unearthed and disclosed before cybercriminals get a chance to discover them and exploit them maliciously."


SRE vs. SWE: Similarities and Differences

In general, SREs and SWEs are more different than they are similar. The main difference between the roles boils down to the fact that SREs are responsible first and foremost for maintaining reliability, while SWEs focus on designing software. Of course, those are overlapping roles to a certain extent. SWEs want the applications they design to be reliable, and SREs want the same thing. However, an SWE will typically weigh a variety of additional priorities when designing and writing software, such as the cost of deployment, how long it will take to write the application and how easy the application will be to update and maintain. These aren’t usually key considerations for SREs. The toolsets of SREs and SWEs also diverge in many ways. In addition to testing and observability tools, SREs frequently rely on tools that can perform tasks like chaos engineering. They also need incident response automation platforms, which help manage the complex processes required to ensure efficient resolution of incidents.


The Biggest Gap in Kubernetes Storage Architecture?

Actually, commercial solutions aren’t better than open source solutions — not inherently anyway. A commercial enterprise storage solution could still be a poor fit for your specific project, require internal expertise, require significant customization, break easily and come with all the drawbacks of an open source solution. The difference is that where an open source solution is all but guaranteed to come with these headaches, a well-designed commercial enterprise storage solution won’t. It isn’t a matter of commercial versus open source, rather it’s good architecture versus bad architecture. Open source solutions aren’t designed from the ground up, making it much more difficult to guarantee an architecture that performs well and ultimately saves money. Commercial storage solutions, however, are. This raises the odds that it will feature an architecture that meets enterprise requirements. Ultimately, all this is to say that commercial storage solutions are a better fit for most Kubernetes users than open source ones, but that doesn’t mean you can skip the evaluation process.


MLOps vs. DevOps: Why data makes it different

All ML projects are software projects. If you peek under the hood of an ML-powered application, these days you will often find a repository of Python code. If you ask an engineer to show how they operate the application in production, they will likely show containers and operational dashboards — not unlike any other software service. Since software engineers manage to build ordinary software without experiencing as much pain as their counterparts in the ML department, it begs the question: Should we just start treating ML projects as software engineering projects as usual, maybe educating ML practitioners about the existing best practices? Let’s start by considering the job of a non-ML software engineer: writing traditional software deals with well-defined, narrowly-scoped inputs, which the engineer can exhaustively and cleanly model in the code. In effect, the engineer designs and builds the world wherein the software operates. In contrast, a defining feature of ML-powered applications is that they are directly exposed to a large amount of messy, real-world data that is too complex to be understood and modeled by hand.


How to Measure the Success of a Recommendation System?

Any predictive model or recommendation system, without exception, relies heavily on data. Such systems make reliable recommendations based on the facts they have. It’s only natural that the finest recommender systems come from organizations with large volumes of data, such as Google, Amazon, Netflix, or Spotify. To detect commonalities and suggest items, good recommender systems evaluate item data and client behavioral data. Machine learning thrives on data; the more data the system has, the better the results will be. Data is constantly changing, as are user preferences, and your business is constantly changing. That’s a lot of new information. Will your algorithm be able to keep up with the changes? Of course, real-time recommendations based on the most recent data are possible, but they are also more difficult to maintain. Batch processing, on the other hand, is easier to manage but does not reflect recent data changes. The recommender system should continue to improve as time goes on. Machine learning techniques assist the system in “learning” the patterns, but the system still requires instruction to give appropriate results.
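The headline question — how to measure a recommender's success — is commonly answered offline with ranking metrics such as precision@k: of the top-k items recommended, how many did the user actually find relevant? A minimal sketch:

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that the user found
    relevant -- a common offline metric for recommender quality."""
    top_k = recommended[:k]
    if not top_k:
        return 0.0
    return sum(1 for item in top_k if item in relevant) / len(top_k)
```

Tracking a metric like this over successive retrainings is one concrete way to verify that the system "continues to improve as time goes on", whether updates arrive in real time or in batches.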


An Introduction To Decision Trees and Predictive Analytics

Decision trees represent a connecting series of tests that branch off further and further down until a specific path matches a class or label. They’re kind of like a flowchart of coin flips, if/else statements, or conditions that, when met, lead to an end result. Decision trees are incredibly useful for classification problems in machine learning because they allow data scientists to choose specific parameters to define their classifiers. So whether you’re presented with a price cutoff or target KPI value for your data, you have the ability to sort data at multiple levels and create accurate prediction models. ... Each model has its own set of pros and cons, and there are others to explore besides these four examples. Which one would you pick? In my opinion, the Gini model with a maximum depth of 3 gives us the best balance of good performance and highly accurate results. There are definitely situations where the highest accuracy or the fewest total decisions is preferred. As a data scientist, it’s up to you to choose which is more important for your project! ...
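The "Gini model" above refers to using Gini impurity as the criterion for choosing splits. As a sketch of the measure itself (not the full tree-building algorithm):

```python
def gini_impurity(labels):
    """Gini impurity of a set of class labels: the probability of
    misclassifying a random element if it were labeled according to
    the class distribution. 0.0 means a perfectly pure node."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())
```

A tree builder greedily picks the split that most reduces this impurity at each level; capping the depth at 3 (as the author prefers) trades a little purity for a smaller, more interpretable model.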


Five Ways Blockchain Technology Is Enhancing Cloud Storage

In blockchain-based cloud storage, data is split into numerous encrypted fragments that are interlinked through a hashing function. These secure fragments are distributed across the network, with each fragment residing in a decentralized location. Strong security measures -- transaction records, encryption through private and public keys, and hashed blocks -- guarantee powerful protection from hackers. Thanks to sophisticated 256-bit encryption, even an advanced hacker cannot decrypt the data. And even in the unlikely event that a hacker does break the encryption, each decryption attempt would unscramble only a small fragment of the data, not the whole record. These stringent security measures defeat attackers at every turn, making hacking a useless pursuit from a business perspective. Another significant point is that the owners’ data is not stored on any single node. This helps owners regain their privacy, and there are solid provisions for load balancing as well.
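The fragment-and-link scheme described above can be sketched with a simple hash chain. This is only a toy illustration: the XOR "cipher" below stands in for real 256-bit encryption (such as AES-256) and must not be used for actual security, and the fragment size and sample data are invented.

```python
import hashlib

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for real encryption: XOR each byte with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def fragment_and_link(data: bytes, key: bytes, size: int = 16):
    """Encrypt each fragment and chain it to the hash of its predecessor."""
    fragments, prev_hash = [], b"\x00" * 32  # all-zero genesis link
    for i in range(0, len(data), size):
        encrypted = xor_cipher(data[i:i + size], key)
        digest = hashlib.sha256(prev_hash + encrypted).digest()
        fragments.append({"payload": encrypted, "hash": digest})
        prev_hash = digest
    return fragments

frags = fragment_and_link(b"owner data kept off any central node", b"secret-key")
# Each fragment could now live on a different node; re-hashing the chain
# detects tampering, and decrypting one fragment exposes only that fragment.
print(len(frags))
```

This mirrors the two properties the article emphasizes: the hash links make the distributed fragments verifiable, and breaking one fragment's encryption unscrambles only a small section of the record.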


Data Mesh Vs. Data Fabric: Understanding the Differences

According to Forrester’s Yuhanna, the key difference between the data mesh and the data fabric approaches is in how APIs are accessed. “A data mesh is basically an API-driven [solution] for developers, unlike [data] fabric,” Yuhanna said. “[Data fabric] is the opposite of data mesh, where you’re writing code for the APIs to interface. On the other hand, data fabric is low-code, no-code, which means that the API integration is happening inside of the fabric without actually leveraging it directly, as opposed to data mesh.” For James Serra, who is a data platform architecture lead at EY (Ernst & Young) and previously was a big data and data warehousing solution architect at Microsoft, the difference between the two approaches lies in which users are accessing them. “A data fabric and a data mesh both provide an architecture to access data across multiple technologies and platforms, but a data fabric is technology-centric, while a data mesh focuses on organizational change,” Serra writes in a June blog post. “[A] data mesh is more about people and process than architecture, while a data fabric is an architectural approach that tackles the complexity of data and metadata in a smart way that works well together.”


Data Warehouse Automation and the Hybrid, Multi-Cloud

One trend among enterprises that move large, on-premises data warehouses to cloud infrastructure is to break up these systems into smaller units--for example, by subdividing them according to discrete business subject areas and/or practices. IT experts can use a DWA tool to accelerate this task--for example, by subdividing a complex enterprise data model into several subject-specific data marts, then using the DWA tool to instantiate these data marts as separate virtual data warehouse instances. Alternatively, they can use a DWA tool to create new tables that encapsulate different kinds of dimensional models and instantiate these in virtual data warehouse instances. In most cases, the DWA tool is able to use the APIs exposed by the PaaS data warehouse service to create a new virtual data warehouse instance or to make changes to an existing one. The tool populates each instance with data, replicates the necessary data engineering jobs and performs the rest of the operations in the migration checklist described above.
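The subdivide-then-instantiate workflow above can be sketched as follows. Every class, method, subject area, and table name here is hypothetical, invented purely for illustration; no real DWA product or PaaS warehouse API is being described.

```python
# Hypothetical sketch: a DWA-style migration that splits one enterprise model
# into subject-specific virtual warehouse instances via a (fake) PaaS API.

SUBJECT_AREAS = {              # invented subject areas and their tables
    "sales":   ["orders", "customers"],
    "finance": ["invoices", "ledger"],
}

class FakePaaSWarehouse:
    """Stand-in for the API a PaaS data warehouse service might expose."""
    def __init__(self):
        self.instances = {}

    def create_instance(self, name):
        self.instances[name] = {"tables": [], "populated": False}

    def create_table(self, instance, table):
        self.instances[instance]["tables"].append(table)

    def load_data(self, instance):
        self.instances[instance]["populated"] = True

def migrate(warehouse, subject_areas):
    """Instantiate one virtual data-mart instance per subject area."""
    for subject, tables in subject_areas.items():
        instance = f"mart_{subject}"
        warehouse.create_instance(instance)   # new virtual instance via the API
        for table in tables:                  # tables from the dimensional model
            warehouse.create_table(instance, table)
        warehouse.load_data(instance)         # populate + replicate data jobs

wh = FakePaaSWarehouse()
migrate(wh, SUBJECT_AREAS)
print(sorted(wh.instances))
```

The point of the sketch is the shape of the automation, not the API: the DWA tool loops the same create/populate steps over each subject-specific unit instead of a human repeating them by hand.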


Rethinking IoT/OT Security to Mitigate Cyberthreats

We have seen destructive and rapidly spreading ransomware attacks, like NotPetya, cripple manufacturing and port operations around the globe. However, existing IT security solutions cannot solve those problems due to the lack of standardized network protocols for such devices and the inability to certify device-specific products and deploy them without impacting critical operations. So, what exactly is the solution? What do people need to do to resolve the IoT security problem? Working to solve this problem is why Microsoft has joined industry partners to create the Open Source Security Foundation and has acquired IoT/OT security leader CyberX. The integration between CyberX’s IoT/OT-aware behavioral analytics platform and Azure unlocks the potential of unified security across converged IT and industrial networks. And, as a complement to the embedded, proactive IoT device security of Microsoft Azure Sphere, CyberX IoT/OT provides monitoring and threat detection for devices that have not yet upgraded to Azure Sphere security.



Quote for the day:

"It's hard for me to answer a question from someone who really doesn't care about the answer." -- Charles Grodin