Daily Tech Digest - October 20, 2021

The challenges of cloud data management

IT departments are facing a growing challenge to stay abreast of advancements in cloud technologies, provide day-to-day support for increasingly complex systems, and adhere to ever-changing regulatory requirements. In addition, they must ensure the systems they support are able to scale to meet performance objectives and are secured against unauthorized access. ... Much like data security, adhering to regulatory compliance frameworks is a shared responsibility between the customer and cloud provider. Larger cloud vendors will provide third-party auditor compliance reports and attestations for the regulatory frameworks they support. It will be up to each organization to read the documentation and ensure the contents meet specific compliance needs. Most leading platforms will also provide tools to help clients configure identity and access management, secure and monitor their data, and implement audit trails. But the responsibility for ensuring the tools' configuration and usage meet the framework's control objectives rests solely with the customer. ... We know one of IT's core responsibilities is to transform raw data into actionable insights.


Learning to learn: will machines acquire knowledge as naturally as children do?

We create new-to-the-world machines, with sophisticated specifications, that are hugely capable. But to reach their potential, we have to expose them to hundreds of thousands of training examples for every single task. They just don’t ‘get’ things like humans do. One way to get machines to learn more naturally is to help them learn from limited data. We can use generative adversarial networks (GANs) to create new examples from a small core of training data rather than having to capture every situation in the real world. It is ‘adversarial’ because one neural network is pitted against another to generate new synthetic data. Then there’s synthetic data rendering – using gaming engines or computer graphics to render new scenarios. Finally, there are algorithmic techniques such as Domain Adaptation, which involves transferring knowledge (using data collected in winter to make predictions in summer, for example), or Few-Shot Learning, which makes predictions from a limited number of samples. Multi-Task Learning takes a different limited-data route, exploiting commonalities and differences to solve multiple tasks simultaneously.
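
To make the adversarial setup concrete, here is a minimal, hedged sketch in PyTorch: a generator learns to mimic a small core of training data (a 1-D Gaussian stands in for real samples) while a discriminator tries to tell real from fake, after which the generator can emit new synthetic examples. It illustrates the training loop only, not a production augmentation pipeline.

```python
# Minimal GAN sketch: train a generator to mimic a small "core" of real
# data, then sample new synthetic examples from it. Illustrative only;
# real data-augmentation GANs need far more care (architecture, tuning,
# evaluation of sample quality).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a small core of real training data: 256 samples from N(3, 1).
real_data = 3.0 + torch.randn(256, 1)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator
loss = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(G.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    # Discriminator step: real samples labeled 1, generated samples labeled 0.
    fake = G(torch.randn(256, 8)).detach()
    d_loss = loss(D(real_data), torch.ones(256, 1)) + \
             loss(D(fake), torch.zeros(256, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make D label generated samples as real.
    fake = G(torch.randn(256, 8))
    g_loss = loss(D(fake), torch.ones(256, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# New synthetic examples drawn from the trained generator.
print(G(torch.randn(5, 8)).detach().squeeze())
```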


IT hiring: 5 signs of a continuous learner

Whatever you call it, it’s an important attribute to consider when hiring or grooming the most capable IT professionals today. A continuous learner can offer more bang for the buck in one of the strongest job markets in recent years. “We have found that many companies, while their job descriptions state they are looking for a certain number of years of experience in a laundry list of technologies, are being more flexible and hiring candidates that may be more junior, or those who lack a few main technologies,” Spathis says, noting that many organizations are willing to take the risk on more junior or less specifically experienced candidates who are eager, trainable, and able to learn new skills. There’s definite agreement on the demand for continuous learners in the IT function today. “To thrive during these changing times, it’s imperative that IT organizations continuously grow and change with changing needs,” says Dr. Sunni Lampasso, executive coach and founder of Shaping Success. “As a result, IT organizations that employ continuous learners are better equipped to navigate the changing work world and meet changing demands.”


Ethical and Productivity Implications of Intelligent Code Creation

AI technology is changing the working process of software engineers and test engineers, improving productivity, quality, and speed. Businesses use AI algorithms to improve everything from project planning and estimation to quality testing and the user experience. Application development continues to evolve in its sophistication, while the business increasingly expects solutions to be delivered faster than ever. Most of the time, organizations have to deal with challenging problems like errors, defects, and other complexities while developing complex software. Development and testing teams no longer have the luxury of time they enjoyed when monthly product launches were the gold standard. Instead, today’s enterprises demand weekly releases and updates that trickle in even more frequently. This is where self-coding applications come into play. Applications that generate code themselves help programmers accomplish a task in less time and extend their programming ability. Artificial intelligence is the result of coding, but now coding is also the result of artificial intelligence. It is helping almost every sector of the business, and coders, to enhance the software development process.


How To Transition From Data Analyst To Data Scientist

Before even thinking about making the transition, one has to be very clear about what a data scientist does, and introspect about the skills one has now and what has to be done to fill the gaps. A data scientist not only handles data but provides much deeper insights from it. Other than gaining the right mathematical and statistical know-how, training yourself to look at business problems with the mindset of a data scientist, and not just that of a data analyst, will be of great help. This means that while looking into a problem, developing your critical thinking and analytical skills, getting deep into the problem to be solved at hand, and coming up with the right way to approach the solution will train you for the future. A data analyst might get by without great coding skills, but a data scientist surely has to know code well. Data scientists use tools like R and Python to derive interpretations from the massive data sets they handle. As a data analyst, if you are not great at coding or don’t know the common tools, it would be wise to start taking basic courses on them and then use them in real-world applications.


Application Security Manager: Developer or Security Officer?

First, an ASM has to understand what a supervised project is about. This is especially important for agile development, where, unlike the waterfall model, you don’t have two months to perform a pre-release review. An ASM’s job is to make sure that the requirements set at the design stage are correctly interpreted by the team, properly adopted in the architecture, are generally feasible, and will not cause serious technical problems in the future. Typically, the ASM is the main person who reads, interprets, and assesses automated reports and third-party audits. ... Second, an ASM should know about various domains, including development processes and information security principles. Hard skills are also important because it’s very difficult to assess the results provided by narrow specialists and automated tools if you can’t read the code and don’t understand how vulnerabilities can be exploited. When a code analysis or penetration test reveals a critical vulnerability, it’s quite common for developers (who are also committed to creating a secure system) to not accept the results and claim that auditors failed to exploit the vulnerability.


Top Open Source Security Tools

WhiteSource detects all vulnerable open source components, including transitive dependencies, in more than 200 programming languages. It matches reported vulnerabilities to the open source libraries in code, reducing the number of alerts. With more than 270 million open source components and 13 billion files, its vulnerability database continuously monitors multiple resources and a wide range of security advisories and issue trackers. WhiteSource is also a CVE Numbering Authority, which allows it to responsibly disclose new security vulnerabilities found through its own research. ... Black Duck software composition analysis (SCA) by Synopsys helps teams manage the security, quality, and license compliance risks that come from the use of open source and third-party code in applications and containers. It integrates with build tools like Maven and Gradle to track declared and transitive open source dependencies in applications built in languages like Java and C#. It maps string, file, and directory information to the Black Duck KnowledgeBase to identify open source and third-party components in applications built using languages like C and C++.


Why You Don't Need to Be a Business Insider in Order to Succeed

No matter what anyone tells you, it’s not a zero-sum game. There is abundance out there for everyone. Of course, money becomes concentrated with various people, but wealth mobility is very real and happening all the time. We hear people talk about the 1% all the time (often in an effort to paint them as a monolithic, evil, controlling class). What they fail to recognize is that people are constantly moving in and out of the 1%. Some of this is down to inherited wealth, and some is down to hard work — but it’s happening all the time. What really lies at the heart of this is fear. We abdicate our power to an imagined ruling class because we’re afraid of the unknown. And before you think this is about blaming you: it is our subconscious being unwilling to take the risk that stops us. You have a built-in stowaway in your mind who wants to maintain the status quo. Therefore, any new growth opportunities — while intellectually exciting and appealing — will be met with emotional resistance at some point. I’m sure you’ve had this happen to you before: you get a new career-changing offer, you do a little dance and head off to celebrate.


Why a new approach to eDiscovery is needed to decrease corporate risk

For businesses, the combination of these factors has led to a big increase in corporate risk, putting significant pressure on any corporate investigations that need to be conducted and making the eDiscovery process much more difficult. Not only are employees and their devices a lot less accessible than they used to be, but the growing use of personal devices, many of which lack proper security protocols or use unsecured networks, leaves company data much more vulnerable to theft or loss. If that wasn’t enough, heightened privacy concerns and the likelihood that personal data will be unintentionally swept up in any eDiscovery processes can make employees even more reluctant to hand over their devices to investigators if/when needed (if investigators can even get hold of them). As a result, many companies are suddenly finding themselves between a rock and a hard place. How can they operate a more employee-friendly hybrid working model while still maintaining the ability to carry out corporate investigations and eDiscovery in the event it’s required?


Three key areas CIOs should focus on to generate value

CIOs and IT executives should focus on three types of partner connections: one-to-one, one-to-many and many-to-many. A one-to-one connection can be taken to the next level and become a generative partnership where the enterprise and technology partner work together to create and build a solution that doesn’t currently exist. The resulting assets are co-owned and produce benefits and revenue for both partners. Generative partnerships are becoming more common. In fact, Gartner forecasts that generative-based IT spending will grow at 31% over the next five years. Beyond one-to-one connections is the formation of ecosystems of multiple partners. One-to-many partnerships work best when a single enterprise needs to focus many players on jointly solving a single problem – such as a city bringing together public and private entities to serve citizens. Many-to-many partnerships are created when a platform brings many different enterprises’ products and services together, to be offered to many different customers. Often called platform business models, these marketplaces and app/API stores enable the many to help the many at ecosystem scale.



Quote for the day:

"Leaders are people who believe so passionately that they can seduce other people into sharing their dream." -- Warren G. Bennis

Daily Tech Digest - October 19, 2021

Micro Frontend Architecture

The idea behind Micro Frontends is to think about a web app as a composition of features that are owned by independent teams. Each team has a distinct area of business it cares about and specializes in. A team is cross-functional and develops its features end-to-end, from database to user interface. ... But why do we need micro frontends? Let’s find out. In the modern era of web apps, the front end is becoming bigger and bigger while the back end grows less important, so most of the code lives in the front end – and the monolith approach doesn’t work for a larger web application. There needs to be a tool for breaking it up into smaller modules that act independently. The solution to the problem is the micro frontend. ... It heavily depends on your business case whether you should or should not use micro frontends. If you have a small project and team, micro frontend architecture is not really required. At the same time, large projects with distributed teams and a large number of requests benefit a lot from building micro frontend applications. That is why today, micro frontend architecture is widely used by many large companies, and that is why you should opt for it too.


CodeSee Helps Developers ‘Understand the Codebase’

As a developer, you’ve likely faced one problem again and again throughout your career: struggling to understand a new codebase. Whether it’s a lack of documentation, or simply poorly-written and confusing code, working to understand a codebase can take a lot of time and effort, but CodeSee aims to help developers not only gain an initial understanding, but to continually understand large codebases as they evolve over time. “We really are trying to help developers to master the understanding of codebases. We do that by visualizing their code, because we think that a picture is really worth a thousand words, a thousand lines of code,” said CodeSee CEO and co-founder Shanea Leven. “What we’re trying to do is really ensure that developers, with all of the code that we have to manage out there — and our codebases have grown exponentially over the past decade — that we can deeply understand how our code works in an instant.” Earlier this month, CodeSee, which is still in beta, launched OSS Port to bring its code visibility and “continuous understanding” product to open source projects, as well as give potential contributors and maintainers a way to find their next project.


Non-Coder to Data Scientist! 5 Inspiring Stories and Valuable Lessons

While looking for inspiring journeys I focus on people coming from a non-traditional background. People coming from non-technology backgrounds. People having zero coding experience. I guess this makes their story inspiring. All those who found their success in data science were willing to learn to code. They were not intimidated by the Kaggle notebooks that they were not able to understand initially. They all understood that it takes time to gain knowledge and persisted until they acquired all the required knowledge. Programming is one of the biggest show stoppers. It is this particular skill that makes many frustrated. It even makes them give up their passion for a career in data science. Programming is not exactly a hard thing to learn. ... Having a growth mindset plays a major role in data science. There are many topics to learn and it can be overwhelming. Instead of saying “I can’t learn math, I can’t be a good programmer, I can never understand statistics,” people with a growth mindset tend to stay positive and keep trying.


How To Stay Ahead of the Competition as an Average Programmer

Apart from getting the satisfaction of being helpful, it has multiple career benefits too. One, I get to learn a lot more by helping others. Two, continuously helping others builds trusted relationships within the organization. In the software industry, your allies come to your help more than you realize. They can return the favor during application integration, defect resolution, challenging meetings, or even in promotion discussions. If you know people and have helped them before, they will be happy to bail you out from difficult situations. Hence, never hesitate to help others at your workplace. ... Simultaneously, it might not be possible for you to be of help to everyone. But you can justify why you are unable to help. Being arrogant or repeatedly rejecting requests as not your responsibility makes others think you are not a team player. ... While working in a team environment, you are bound to face challenges. You will need to follow company policies and processes that you might find hinder your productivity. You will have to work with people who slow down the team’s progress due to their poor contribution.


A real-world introduction to event-driven architecture

An event-driven architecture eliminates the need for a consumer to poll for updates; it instead receives notifications whenever an event of interest occurs. Decoupling the event producer and consumer components is also advantageous to scalability because it separates the communication logic and business logic. A publisher can avoid bottlenecks and remain unaffected if its subscribers go offline, or if their consumption slows down. If any subscriber has trouble keeping up with the rate of events, the event stream records them for future retrieval. The publisher can continue to pump out notifications without throughput limitations and with high resilience to failure. Using a broker means that a publisher does not know its subscribers and is unaffected if the number of interested parties scales up. Publishing to the broker offers the event producer the opportunity to deliver notifications to a range of consumers across different devices and platforms. Estimates suggest that 30% of all global data consumed by 2025 will result from information exchange in real time.
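
To illustrate the decoupling described above, here is a minimal, hypothetical in-process sketch of a broker: the publisher never references its subscribers, and the event stream is retained so a slow or offline consumer can replay what it missed. A production system would use a durable log such as Kafka or a managed pub/sub service instead.

```python
# Minimal broker sketch: publishers and subscribers are decoupled, and
# events are retained in a log so consumers that fall behind can catch up.
from collections import defaultdict
from typing import Callable

class Broker:
    def __init__(self):
        self.log: list[dict] = []                  # event stream, retained for replay
        self.subscribers: dict[str, list[Callable]] = defaultdict(list)

    def publish(self, topic: str, event: dict) -> None:
        # The producer only talks to the broker; it is unaffected by how
        # many subscribers exist or whether any of them are keeping up.
        self.log.append({"topic": topic, **event})
        for handler in self.subscribers[topic]:
            handler(event)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self.subscribers[topic].append(handler)

    def replay(self, topic: str, handler: Callable, from_offset: int = 0) -> None:
        # A consumer that was offline can re-read retained events.
        for event in self.log[from_offset:]:
            if event["topic"] == topic:
                handler(event)

broker = Broker()
broker.subscribe("orders", lambda e: print("notified:", e))
broker.publish("orders", {"id": 42, "status": "created"})
broker.replay("orders", lambda e: print("replayed:", e))
```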


Pros and cons of cloud infrastructure types and strategies

A multi-cloud strategy simply means that an organisation has chosen to use multiple public cloud providers to host their environments. A hybrid cloud approach means that a company is using a combination of on-premises infrastructure, private cloud and public cloud — and possibly more than one of the latter, meaning that company would be implementing a multi-cloud strategy with a hybrid approach. At times, these terms are used interchangeably. Companies choose a multi-cloud strategy for a multitude of reasons, not least of which is avoiding vendor lock-in. Spreading workloads across multiple cloud providers increases reliability, as a company is able to fail over to a secondary provider if another provider experiences an outage. Optionality is a huge benefit to companies who want to be able to pick and choose which services will most seamlessly integrate into their environments, as each major public cloud provider provides some unique services for different types of workloads. Furthermore, when a company uses multiple public cloud providers, it retains flexibility and can transfer workloads from one provider to another.


Gartner: Top strategic technology trends for 2022

The first of those trends is the growth of the distributed enterprise. Driven by the massive growth in remote and hybrid working patterns, traditional office-centric organizations are evolving into geographically distributed enterprises. “For every organization, from retail to education, their delivery model has to be reconfigured to embrace distributed services,” Groombridge said. Such operations will stress the network that supports users and consumers alike, and businesses will need to rearchitect and redesign to handle it. ... “Data is widely scattered in many organizations and some of that valuable data can be trapped in siloes,” Groombridge said. “Data fabrics can provide integration and interconnectivity between multiple silos to unlock those resources.” Groombridge added that data-fabric deployments will also force significant network-topology readjustments and in some cases, to work effectively, could require their own edge-networking capabilities. The result is that the fabric will unlock data that can be used by AI and analytics platforms to support new applications and bring about business innovations more quickly, Groombridge said.


BlackMatter Ransomware Defense: Just-In-Time Admin Access

To be fair, the BlackMatter alert, beyond including intrusion system rules, also details the group's known tactics, techniques and procedures, and includes additional recommended defenses, such as implementing "time-based access for accounts set at the admin-level and higher," due to ransomware-wielding attackers' propensity to attack organizations after hours, over weekends, on Christmas Eve or any other inconvenient time. What does time-based access look like? One approach is just-in-time access, which enforces least-privileged access except for temporarily granting higher access levels via Active Directory. "This is a process where a network-wide policy is set in place to automatically disable admin accounts at the AD level when the account is not in direct need," according to the advisory. "When the account is needed, individual users submit their requests through an automated process that enables access to a system, but only for a set timeframe to support task completion."
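
As a rough illustration of that request-and-expire flow, here is a hedged Python sketch. The `Directory` class and its method names are hypothetical stand-ins for whatever directory API an environment exposes; the point is the pattern of disabled-by-default admin accounts and automatic revocation when the approved window closes.

```python
# Hypothetical sketch of just-in-time admin access: accounts stay disabled
# by default and are enabled only for an approved, time-boxed window, then
# revoked automatically. `Directory` stands in for a real Active Directory
# client; these method names are illustrative, not a real API.
import threading

class Directory:
    def enable_account(self, account: str) -> None:
        print(f"[AD] enabled {account}")

    def disable_account(self, account: str) -> None:
        print(f"[AD] disabled {account}")

class JitAccessManager:
    def __init__(self, directory: Directory, max_window_seconds: int = 3600):
        self.directory = directory
        self.max_window = max_window_seconds   # network-wide policy cap

    def request_access(self, account: str, reason: str, seconds: int) -> None:
        # Elevate only on request, capped by the policy's maximum window.
        window = min(seconds, self.max_window)
        print(f"granting {account} admin for {window}s: {reason}")
        self.directory.enable_account(account)
        # Revoke automatically when the window expires.
        timer = threading.Timer(window, self.directory.disable_account,
                                args=[account])
        timer.daemon = True
        timer.start()

jit = JitAccessManager(Directory(), max_window_seconds=1800)
jit.request_access("svc-dbadmin", "patch window for ticket 4812", seconds=900)
```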


Why is collaboration between the CISO and the C-suite so hard to achieve?

Poor communication between the CISO and business unit heads is a major barrier to safe and successful business transformation. To properly educate people within the organisation about the realities of a cyber attack, the CISO must move beyond data, buzzwords and technical jargon and tell a story that brings the threat to life for those without subject-matter insight. If the CISO can intelligibly and clearly articulate the threats and the steps necessary to mitigate them, they are much more likely to capture executives’ attention and help ensure that all key stakeholders understand the trade-offs between new technology and added risk. If they’re able to adapt their language to specific individuals and business functions, they’ll have even greater success. For instance, a chief marketing officer is most likely interested in the risks to customer data, while chief financial officers will want to better understand how to secure banking information. ... “CISOs still have more work to do in breaking down the communication barriers by talking in less technical language for boards to better understand potential business risks.”


Developer Learning isn’t just Important, it’s Imperative

Every company that isn’t consistently upgrading its codebase or shifting to new frameworks is facing a serious business problem. If your codebase is getting older and older, you face the risk of massive future migrations. And if you’re not moving to new framework versions, you’re missing important benefits that your team could otherwise leverage. Technical debt naturally increases over time. The longer it goes unaddressed, the sooner you’ll get stuck paying high costs in migration, hiring, or massive upskilling efforts that take weeks or months. Like saving for retirement, incremental upskilling pays dividends in the long run. Every industry leader I’ve talked to worries about the scarcity of high-quality software engineers. That means companies feel serious pressure to constantly hire new, better developers. But rather than looking externally for a solution, what if companies looked internally? Here’s the reality: meaningful developer learning helps companies convert silver medalists into gold medalists.



Quote for the day:

"A true dreamer is one who knows how to navigate in the dark" -- John Paul Warren

Daily Tech Digest - October 18, 2021

Magnanimous machines: Why AI work should work for people and not the other way around

The consolidation of power amongst a Big Tech elite fused with state intelligence grows ever stronger. These entities can know everything about us, yet carefully hide their own clandestine, obfuscated activities. The best defence against this asymmetry is radical, mandated, and cryptographically secure transparency. To illustrate, in commercial aircraft, we have two essential data recorders (‘black boxes’). One monitors the aircraft itself, and the other monitors cockpit chatter. Both recorders are necessary to understand why an incident has occurred. For the same reasons, we need a similar approach to humane technology. Transparency is the foundation upon which every other aspect of ethical technology rests. It is essential to understand a system, its functions, as well as attributes of the organisation and, not to forget, those who steer it. Through transparency, we can understand whether the incentive structures within organisations are aligned towards producing honest, good-faith outcomes. We can understand what may have gone wrong and how to fix it in the future.


8 Keys to Failproof App Modernization

Typically, modernization initiatives are strategized or rolled out before major events or milestones: data center and vendor contracts coming up for renewal, software and hardware platforms going end of service and support life, government-imposed deadlines to implement regulatory and compliance requirements, or an ageing workforce and the risk of a shortage of skills. In all such scenarios, since the accumulated technical debt is so high, these become multi-year, multi-million-dollar modernization programs. Risks are equally higher in such large programs. And to optimize costs and minimize risks, the temptation sometimes is to somehow get these workloads to the target platform [containerize or rehost without really changing the underlying architecture]. This will result in more technical debt and will necessitate another modernization initiative in a few years, and so it goes. The chances of success are much higher if the initiatives are incremental in nature and time-bound, say 3-6 months. In fact, it is a recommended practice in agile development to pay down technical debt regularly, every single sprint.


Three key issues to tackle before smart cities become a reality

Many smart systems require data to be validated and assimilated in real-time for it to be relevant. This poses a problem, in that it requires every citizen to agree to their data being collected and shared, which in turn requires trust. That means the collection of data and its use to influence critical decisions in smart cities, needs careful consideration. However, citizens often worry about being ‘tracked’ – a difficult perception to eradicate in a world where privacy and security are among the biggest challenges each of us faces. Overcoming it requires us to build a comprehensive data privacy and security strategy into any smart city development, with local governments then responsible for educating individuals and society on how their data will be stored, who has access, and how it can be used. Such strategies require careful consideration, as any mistakes that harm public trust could impact the success of smart cities. The NHS COVID-19 app is a good example of this – once people lacked trust in the application, it took only a matter of days for thousands of people to delete it. 


Engineering Digital Transformation for Continuous Improvement

Getting organizations to invest in improvements and embrace new ways of working is a challenge. They don’t just need the right technical solutions, they also need to address the organizational change management challenges that are creating resistance to new ways of working. Organizations frequently have champions that have ideas for improvement and are trying to influence change without a lot of success. These champions find that the harder they push for change, the more people resist. We can have all the best approaches in the world, but if we can’t figure out how to overcome this resistance, organizations will never adopt them and realize the benefits. While pushing for change is the natural approach, research by organizational change management experts, like Professor Jonah Berger in his book “The Catalyst,” suggests this is the wrong approach. His research shows that the harder you push for change, the more people resist. Whenever people feel they are being influenced, their anti-persuasion radar kicks in and they start shooting down ideas and resisting the change being offered.


A transactional approach to power

As Battilana and Casciaro tell it, it’s not your personal or positional power that determines your effectiveness in any given situation. It is your ability to understand what resources the involved parties want and how the resources are distributed—that is, the balance of power. “We find this extremely compelling,” explains Casciaro, “because it brings power relationships—whether they are interpersonal, intergroup, interorganizational, or international—down to four simple factors.” Taking this a step further, the ability to shift the balance of power within a situation determines your success at exercising power. Battilana and Casciaro find there are several key strategies that support this ability to rebalance power. If you have resources the other party values, attraction is a key strategy. You try to increase the value of those resources for the other party. Personal and corporate brand-building are organized around this strategy. If the other party has too many paths to access your resources, consolidation is a key strategy. You try to eliminate or otherwise lessen the alternatives. Employees join unions to limit the alternatives of employers and increase their power.


Treasury Dept. to Crypto Companies: Comply with Sanctions

The announcement is the latest in a series of moves from the Biden administration to combat ransomware, following high-profile attacks this year that disrupted the East Coast's fuel supply during the Colonial Pipeline incident; jeopardized the nation's meat supply by attacking JBS USA; and knocked some 1,500 downstream organizations offline by zeroing in on managed service provider Kaseya over the July Fourth holiday. Last month, the Treasury Department blacklisted Russia-based cryptocurrency exchange Suex for allegedly laundering tens of millions of dollars for ransomware operators, scammers and darknet markets. In its latest issuance, the department alleges that over 40% of Suex’s transaction history had been associated with illicit actors, involving the proceeds from at least eight ransomware variants. Similarly, this week, the White House National Security Council facilitated a 30-nation, two-day "counter-ransomware" event, which found senior officials strategizing on ways to improve network resiliency, address illicit cryptocurrency usage, and heighten law enforcement collaboration and diplomacy.


DevSecOps: 11 questions to ask about your security strategy now

Where does friction exist between security and business goals? The question is relatively self-explanatory: DevSecOps exists in part to remove friction and bottlenecks that have historically introduced risks rather than reduce them. The question also has a subtext: What are we doing about it? This friction often goes unaddressed because, well, it’s unaddressed – as in, people avoid pointing it out or talking about it, whether because of poor relationships, fear factors, cultural acceptance, or other reasons. Leaders need to take an active role here by showing their willingness to talk about it, without finger-pointing or other toxic behaviors. “Leaders should constantly be probing and trying to understand the friction points between the business and DevSecOps,” says Jerry Gamblin, director of security research at Kenna Security, now part of Cisco. “These often uncomfortable conversations will help you refocus your team’s goals on the company’s goals.” A willingness to have those uncomfortable conversations as a pathway to positive long-term change is a key characteristic of a healthy culture.


The importance of crisis management in the age of ransomware

How to prepare for ransomware attacks is an often-asked question. From my point of view, the best action is to go through the checklist of security controls that prevent hackers from taking control of your network. Organizations like Servadus offer a Ransomware Readiness Assessment which helps organizational leadership identify current risks to the corporation. Of course, having up-to-date incident response and business continuity plans is part of that assessment. Beyond that, the real value comes from remediating weak cybersecurity controls. Additionally, organizations should implement a framework to shore up security control implementation and sustainability. Many organizations try to maintain compliance and security controls but become vulnerable to attacks 3 to 6 months after validating the security controls they have in place. The long-term strategy is about validating sustainable security controls. The service framework also allows organizations to evaluate threats to the organization and vulnerabilities in the system software in use.


The importance of staff diversity when it comes to information security

The information security team’s decision-making is diversified, which contributes to the organisation’s overall strength. The fact that each employee provides a unique perspective on the problem makes it simpler to recognise and address hidden vulnerabilities in the security operations, as well as to identify and correct the deficiencies of other employees, helping them to grow in their own areas of expertise. Consider the likelihood of a breach, as well as the “red team” that will be assigned to deal with the situation. The SOC analyst looks at the logs, the security engineer looks for the vulnerability, and other team members look for a defensive approach to protect against it. As a result, in order to maintain a strong information security team, it is critical to have a diverse workforce. It facilitates task efficiency while encouraging alternative viewpoints. ... An organisation’s security cannot be handled by a single product, and in order to maintain and handle those security products, companies require a large number of employees. So, in order to work efficiently and effectively without being reliant on a single person or product, this should be common knowledge.


Is it right or productive to watch workers?

The recent rise in employee surveillance accelerated during the pandemic, largely because it had to, but the bottom line is that we are now more than ever accustomed to being watched. We accept the intrusion of cameras in novel spaces under the promise of increased safety; doorbell cameras spring to mind, but so too do webcams and smartphones; we accept data tracking to prove we’re “not a robot” on websites; we accept that our information, our clicks, and our preferences are observed and noted. We seem to be primed now to accept that companies have a reasonable expectation to protect their own safety, so to speak, by monitoring us. One recent survey by media researcher Clutch of 400 US workers found that only 22% of 18- to 34-year-old employees were concerned about their employers having access to their personal information and activity from their work computers. Meanwhile, in a pre-pandemic survey of US workers by US media group Axios from August 2019, 62% of respondents agreed that employers should be able to use technology to monitor employees.



Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor

Daily Tech Digest - October 17, 2021

Multi-User IP Address Detection

When an Internet user visits a website, the underlying TCP stack opens a number of connections in order to send and receive data from remote servers. Each connection is identified by a 4-tuple (source IP, source port, destination IP, destination port). Repeating requests from the same web client will likely be mapped to the same source port, so the number of distinct source ports can serve as a good indication of the number of distinct client applications. By counting the number of open source ports for a given IP address, you can estimate whether this address is shared by multiple users. User agents provide device-reported information about themselves such as browser and operating system versions. For multi-user IP detection, you can count the number of distinct user agents in requests from a given IP. To avoid overcounting web clients per device, you can exclude requests that are identified as triggered by bots and count only requests from user agents that are used by web browsers. There are some tradeoffs to this approach: some users may use multiple web browsers and some other users may have exactly the same user agent.
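
The two heuristics lend themselves to a straightforward sketch. The code below, with illustrative field names and thresholds, counts distinct source ports and distinct browser user agents per IP and flags addresses that look shared:

```python
# Sketch of the two heuristics described above: estimate whether an IP is
# shared by counting distinct source ports and distinct user agents per IP.
# Field names and thresholds are illustrative, not calibrated values.
from collections import defaultdict

def estimate_shared_ips(requests, port_threshold=50, ua_threshold=5):
    """requests: iterable of dicts with 'ip', 'src_port', 'user_agent', 'is_bot'."""
    ports = defaultdict(set)
    agents = defaultdict(set)
    for r in requests:
        if r.get("is_bot"):          # exclude bot traffic to avoid overcounting
            continue
        ports[r["ip"]].add(r["src_port"])
        agents[r["ip"]].add(r["user_agent"])

    # Flag an IP as multi-user if either signal crosses its threshold.
    return {ip: (len(ports[ip]) >= port_threshold
                 or len(agents[ip]) >= ua_threshold)
            for ip in ports}

sample = [
    {"ip": "203.0.113.7", "src_port": p, "user_agent": f"UA-{p % 8}", "is_bot": False}
    for p in range(40000, 40060)
]
print(estimate_shared_ips(sample))   # {'203.0.113.7': True}
```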


Critical infrastructure security dubbed 'abysmal' by researchers

"While nation-state actors have an abundance of tools, time, and resources, other threat actors primarily rely on the internet to select targets and identify their vulnerabilities," the team notes. "While most ICSs have some level of cybersecurity measures in place, human error is one of the leading reasons due to which threat actors are still able to compromise them time and again." Some of the most common issues allowing initial access cited in the report include weak or default credentials, outdated or unpatched software vulnerable to bug exploitation, credential leaks caused by third parties, shadow IT, and the leak of source code. After conducting web scans for vulnerable ICSs, the team says that "hundreds" of vulnerable endpoints were found. ... Software accessible with default manufacturer credentials allowed the team to access the water supply management platform. Attackers could have tampered with water supply calibration, stop water treatments, and manipulate the chemical composition of water supplies.


What is a USB security key, and how do you use it?

There are some potential drawbacks to using a hardware security key. First of all, you could lose it. While security keys provide a substantial increase in security, they also provide a substantial increase in responsibility. Losing a security key can result in a serious headache. Most major websites suggest that you set up backup 2FA methods when enrolling a USB security key, but there's always a small but real chance that you could permanently lose access to a specific account if you lose your key. Security-key makers suggest buying more than one key to avoid this situation, but that can quickly get expensive. Cost is another issue. A hardware security key is the only major 2FA method for which you have to spend money. You can get a basic key supporting the U2F/WebAuthn standards for $15, but some websites and workplaces require specialized protocols for which compatible keys can cost up to $85 each. Finally, limited usability is also a factor. Not every site supports USB security keys. If you're hoping to use a security key on every site for which you have an account, you're guaranteed to come across at least a few that won't accept your security key.


Future-proofing the organization the ‘helix’ way

The leaders need a high level of domain expertise, obviously, but other skills as well. As capability managers, these leaders must excel at strategic workforce management, for example—not short-sighted resource attribution for the products at hand, but the strategic foresight and long-term perspective to understand what the workload will be today, tomorrow, three to five years from now. They need to understand what skills they don’t have in-house and must acquire or build. These leaders become supply-and-demand managers of competence. They must also be excellent—and rigid—portfolio managers who make their resource decisions in line with the overall transformation. The R&D organization, for example, cannot start research projects inside a product line whose products are classified as “quick return,” even if they have people idle. It’s a different mindset. In fact, R&D leaders don’t necessarily have to be the best technologists in order to be successful. They must be farsighted and able to anticipate trends—including technological trends—but ultimately what matters is their ability to build the department in a way that ensures it’s ready to carry the demands of the organization going forward.


Robots Will Replace Our Brains

Over the years, despite numerous fruitless attempts, no one has come close to recreating this organ with all its intricate details; it is challenging to fathom such an invention in the scientific world at this point, considering the discoveries that surface every other day. As one research director mentions, we are very good at gathering data and developing algorithms to reason with that data. Nevertheless, that reasoning is only as sound as the data, one step removed from reality, for the AI we have now. Science fiction movies, for instance, depict only a thin line separating human intelligence from artificial intelligence. ... A new superconducting switch being constructed by researchers at the U.S. National Institute of Standards and Technology (NIST) will soon enable computers to analyze and make decisions just like humans do. The ultimate goal is to integrate this switch into everyday life, from transportation to medicine. The invention contains an artificial synapse that processes electrical signals just like a biological synapse does and converts them to an appropriate output, just like the brain does.


Data Storage Strategies - Simplified to Facilitate Quick Retrieval of Data and Security

No matter what the reason for the downtime, it may be very costly. An efficient data strategy goes beyond just deciding where data will be kept on a server. When it comes to disaster recovery, hardware failure, or a human mistake, it must contain methods for backing up the data and ensuring that it is simple and fast to restore. Putting a disaster recovery plan in place is a good start and helps guarantee that data and the related systems are available with a minimum of disruption. Cloud-based disaster recovery, as well as virtualization, are now required components of every disaster recovery strategy. Together they can help assure that no customer will ever experience more downtime than they can afford at any given moment. By relying on a cloud storage service, the company can outsource the storage issue. By using online data storage, the business can minimize the costs associated with internal resources. With this technology, the business does not need any internal resources or assistance to manage and keep its data; the data warehousing consulting services provider takes care of everything.


RISC-V: The Next Revolution in the Open Hardware Movement

You could always build your own proprietary software and be better than your competitors, but the world has changed. Now almost everyone is standing on the shoulders of giants. When you need an operating system kernel for a new project, you can use Linux directly. No need to recreate a kernel from scratch, and you can also modify it for your own purpose (or write your own drivers). You’ll be certain to rely on a broadly tested product because you are just one of a million users doing the same. That is exactly what relying on an open source CPU architecture could provide. No need to design things from scratch; you can innovate on top of the existing work and focus on what really matters to you, which is the value you are adding. At the end of the day, it means lowering the barriers to innovation. Obviously, not everyone is able to design an entire CPU from scratch, and that’s the point: You can bring only what you need or even just enjoy new capabilities provided by the community, exactly the same way you do with open source software, from the kernel to languages.


The Conundrum Of User Data Deletion From ML Models

As the name says, approximate deletion enables us to eliminate the majority of the implicit data associated with users from the model. They are ‘forgotten,’ but only in the sense that our models can be retrained at a more opportune time. Approximate deletion is particularly useful for rapidly removing sensitive information or unique features associated with a particular individual that could be used for identification in the future, while deferring computationally intensive full model retraining to times of lower computational demand. Approximate deletion can even accomplish the exact deletion of a user’s implicit data from the trained model under certain assumptions. The deletion challenge has been tackled differently by researchers than by their counterparts in the field. Additionally, the researchers describe a novel approximate deletion technique for linear and logistic models whose cost is linear in the feature dimension and independent of the amount of training data. This is a significant improvement over conventional approaches, which always depend superlinearly on the size of the training set.
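
The article does not spell out the algorithm, but a classic building block for exact deletion in linear models gives the flavor: if a least-squares model maintains the inverse Gram matrix, one sample can be removed with a rank-one Sherman-Morrison downdate instead of retraining from scratch. The sketch below shows that standard trick; it is not the researchers' exact method.

```python
# Sketch: exact deletion of one sample from a least-squares model without
# full retraining, via a Sherman-Morrison rank-one downdate of (X^T X)^-1.
# A standard linear-algebra building block, not the paper's specific method.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=1000)

A_inv = np.linalg.inv(X.T @ X)      # maintained alongside the model
b = X.T @ y
theta = A_inv @ b                   # current model weights

# Delete sample i: downdate A_inv and b, then recompute the weights.
# (A - x x^T)^-1 = A^-1 + (A^-1 x)(A^-1 x)^T / (1 - x^T A^-1 x)
i = 42
x_i, y_i = X[i], y[i]
u = A_inv @ x_i
A_inv_del = A_inv + np.outer(u, u) / (1.0 - x_i @ u)
b_del = b - y_i * x_i
theta_del = A_inv_del @ b_del

# Check against retraining from scratch with the sample removed.
X_rest, y_rest = np.delete(X, i, axis=0), np.delete(y, i)
theta_ref = np.linalg.solve(X_rest.T @ X_rest, X_rest.T @ y_rest)
print(np.allclose(theta_del, theta_ref))   # True
```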


9 reasons why you’ll never become a Data Scientist

Have you ever invested an entire weekend in a geeky project? Have you ever spent your nights browsing GitHub while your friends were out to party? Have you ever said no to doing your favorite hobby because you’d rather code? If you could answer none of the above with yes, you’re not passionate enough. Data Science is about facing really hard problems and sticking at them until you find a solution. If you’re not passionate enough, you’ll shy away at the sight of the first difficulty. Think about what attracts you to becoming a Data Scientist. Is it the glamorous job title? Or is it the prospect of plowing through tons of data in search of insights? If it is the latter, you’re heading in the right direction. ... Only crazy ideas are good ideas. And as a Data Scientist, you’ll need plenty of those. Not only will you need to be open to unexpected results — they occur a lot! But you’ll also have to develop solutions to really hard problems. This requires a level of the extraordinary that you can’t reach with ordinary ideas.


Why Don't Developers Write More Tests?

If deadlines are tight or the team leaders aren’t especially committed to testing, it is often one of the first things software developers are forced to skip. On the other hand, some developers just don’t think tests are worth their time. “They might think, ‘this is a very small feature, anyone can create a test for this, my time should be utilized in something more important.’” Mudit Singh of LambdaTest told me. ... In truth, there are some legitimate limitations to automated tests. Like many complicated matters in software development, the choice to test or not is about understanding the tradeoffs. “Writing automated tests can provide confidence that certain parts of your application work as expected,” Aidan Cunniff, the CEO of Optic told me, “but the tradeoff is that you’ve invested a lot of time ‘stabilizing’ and making ‘reliable’ that part of your system.” ... While tests might have made my new feature better and more maintainable, they were technically a waste of time for the business because the feature wasn’t really what we needed. We failed to invest enough time understanding the problem and making a plan before we started writing code.



Quote for the day:

"Leaders are readers, disciples want to be taught and everyone has gifts within that need to be coached to excellence." -- Wayde Goodall

Daily Tech Digest - October 16, 2021

9 common risk management failures and how to avoid them

Known for decades as the hub of technical innovation, Silicon Valley has evolved into a bastion of toxic "bro culture," according to Alla Valente, senior analyst at Forrester Research. She also cited other forms of toxic work culture when companies fail to mitigate risks that can alienate employees and customers. Facebook's lukewarm response to the Cambridge Analytica scandal, Valente argued, has significantly eroded its trustworthiness and market potential. Wells Fargo's executives turning a blind eye to the warning signs of the bank's predatory selling practices with their customers "was a strategic decision," Valente said. "It could have been fixed, but fixing culture is never easy." ... Efficiency and resiliency sit at opposite ends of the spectrum, Matlock said. Greater efficiency can lead to greater profits when things go well. The auto industry realized significant savings by creating a supply chain of thousands of third-party suppliers spread across multiple tiers. But during the pandemic, there were massive disruptions in supply chains that lacked resiliency. 


Mandating a Zero-Trust Approach for Software Supply Chains

SBOMs are a great first step towards supply-chain transparency, but there is more that needs to be done. As an analogy, many equate the SBOM to the ingredient labels on food. With that perspective, we can see parallels between our software supply chain and the food supply chain. Subsequently, the need for end-to-end provenance and resistance against tampering should be clear. For this reason, I am encouraged by Google’s proposed Supply-chain Levels for Software Artifacts (SLSA) framework that moves us towards a common language that increases the transparency and integrity of our software supply chain. However, for some software that performs critical functions (e.g., security), food is an inadequate comparison. It may be more apt to compare this type of software to medicine. This analogy brings forth additional considerations. For example, the drug-facts label on medicines includes not just the ingredients, but also usage guidelines and contraindications (i.e., what to look for in case something goes wrong). Furthermore, as we’ve all seen with the COVID-19 vaccine, medicines must undergo intensive review and testing before they are approved for use.
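
To ground the ingredient-label analogy, here is a minimal fragment in the shape of a CycloneDX SBOM, written as a Python dict for illustration; the field names follow the CycloneDX spec as I recall it, so treat it as a sketch to validate against the official schema rather than a conformant document.

```python
# Minimal CycloneDX-style SBOM fragment: the software "ingredient label".
# Shape from memory of the CycloneDX spec; validate against the official
# schema before using it anywhere real.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "log4j-core",
            "version": "2.14.1",
            # Package URL: an unambiguous identity for the ingredient.
            "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
        }
    ],
}

print(json.dumps(sbom, indent=2))
```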


Data Consistency Between Microservices

The root of the problem is querying data from other boundaries that will be immediately inconsistent the moment it’s returned, just as in my first example without a serializable transaction. If you’re making HTTP or gRPC calls to other services to retrieve data that you require to perform business logic, you’re dealing with inconsistent data. If you store a local cache copy that’s eventually consistent, you’re dealing with inconsistent data. Is having inconsistent data an issue? Go ask the business. If it is, then you need to get all relevant data within the same boundary. There are two pieces of data we ultimately needed. We required the Quantity on Hand from the warehouse. In reality, in the distribution/warehouse domain you don’t rely on the “Quantity on Hand”. When dealing with physical goods, the point of truth is what is actually in the warehouse, not what the software/database states. ... The system is eventually consistent with the real world. Because of this, Sales has the concept of Available to Promise (ATP), which is a business function for customer order promising based on what’s been ordered but not yet shipped, purchased but not yet received, etc.
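
A hedged sketch of that ATP calculation, with illustrative field names: Sales promises against projected supply rather than a synchronously queried warehouse count, which keeps the decision inside its own boundary.

```python
# Sketch of Available to Promise (ATP) as described above: promise customer
# orders from projected supply (on hand + purchased but not yet received)
# minus demand already committed (ordered but not yet shipped).
# Field names are illustrative, not from the article.
def available_to_promise(on_hand: int,
                         purchased_not_received: int,
                         ordered_not_shipped: int) -> int:
    return on_hand + purchased_not_received - ordered_not_shipped

# Sales can promise against this figure without calling the warehouse
# service synchronously; the inputs are maintained from events the
# boundary already receives (OrderPlaced, Shipped, Received, ...).
atp = available_to_promise(on_hand=120,
                           purchased_not_received=50,
                           ordered_not_shipped=80)
print(atp)  # 90 units can still be promised
```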


5 ways CIOs are redefining teamwork for a hybrid world

Most CIOs face similar grand experiments as hybrid work environments are becoming permanent. They are evaluating which team structures have been successful remotely and are looking to replicate them, while balancing innovation, collaboration, mentorship, and culture transfer, which have traditionally been done in person. Some 30% of IT leaders surveyed by IDC say they prefer an “online-first” policy for collaboration, and practices that started during the pandemic will likely continue indefinitely. While many workers say they have been more productive working remotely, that doesn’t always equate to better teamwork. “We’ve squeezed a lot of innovation out of necessity, but some of that serendipitous innovation that occurs through creative collision has been less,” says Aaron De Smet, senior partner at McKinsey and Co., who spoke at the IDG Future of Work Summit in October. “Companies have started to get their heads around a hybrid workforce, but I don’t think they’ve cracked what hybrid interactions look like. More of the work people do is now part of a cross-functional team. It’s part of a collaborative effort … ” De Smet says. 


3 Signs You’re Ready For A Machine Learning Job When You’ve Come From Another Field

Your sense of direction is what lets you know where you are, or which way to go, even when lingering in unfamiliar territory. Religious people would often argue that a lack of direction is a result of not having a purpose, and to some degree, I agree. You shouldn’t have to wait to have a sense of purpose before you can be happy. Imagine how miserable life would be if that’s the case! When a person begins to question their sense of direction in regards to machine learning, it’s usually to do with a lack of appreciation for how far they’ve come. As you learn more, it’s harder to see the small increments you improve by and this may feel as though you are no longer learning — especially when you compare it to a time when you were learning something new every day. Given you meet the generic requirements of the machine learning role you want, then it’s time to apply for a job that challenges you differently from how you could if you were working alone. Start applying.


Google Opens Up Spanner Database With PostgreSQL Interface

The integration of PostgreSQL into Cloud Spanner is deep; it is not just some conversion overlay. At the database schema level, the PostgreSQL interface for Cloud Spanner supports native PostgreSQL data types and its data definition language (DDL), which is the syntax for creating users, tables, and indexes for databases. The upshot is that a schema you write for the PostgreSQL interface for Cloud Spanner will port to and run on any real PostgreSQL database, which means customers are not trapped on Google Cloud if they use this service in production and want to switch. But customers do have to be careful. Spanner functions, like table interleaving, have been added to the PostgreSQL layer because they are important features in Spanner. You can get stuck because of these. ... The PostgreSQL interface for Cloud Spanner compiles PostgreSQL queries down to Spanner’s native distributed query processing and storage primitives and does not just support the PostgreSQL wire protocol, which allows clients and myriad third-party analytics tools to interact with the PostgreSQL database.
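
To illustrate the portability trap, here is a sketch of mostly-standard PostgreSQL DDL plus the Spanner-specific interleaving clause; the INTERLEAVE syntax is written from memory of the Spanner PostgreSQL dialect, so verify it against current documentation before relying on it.

```python
# Illustrative sketch of the lock-in point described above: mostly-standard
# PostgreSQL DDL, plus a Spanner-specific INTERLEAVE clause that a vanilla
# PostgreSQL server will reject. Syntax from memory; check the current
# Cloud Spanner PostgreSQL-dialect docs.
ddl = [
    """
    CREATE TABLE singers (
        singer_id bigint NOT NULL,
        name      varchar(1024),
        PRIMARY KEY (singer_id)
    )
    """,
    """
    CREATE TABLE albums (
        singer_id bigint NOT NULL,
        album_id  bigint NOT NULL,
        title     varchar(1024),
        PRIMARY KEY (singer_id, album_id)
    ) INTERLEAVE IN PARENT singers ON DELETE CASCADE   -- Spanner-only clause
    """,
]

# The second statement is where portability breaks: drop the INTERLEAVE
# clause and the schema runs unchanged on any real PostgreSQL database.
for statement in ddl:
    print(statement.strip())
```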


The pursuit of transformation: Opportunities and pitfalls

Some transformations fail when there is a lack of alignment between the company’s strategy and its employees, customers and partners. There is a famous fable of an ant trying its hardest to change its trajectory but not realising that it is sitting on an elephant that’s going in the opposite direction. No matter how hard the little ant tries, it will not reach its destination as long as the elephant is not in alignment. All organizations have a culture and an emotional ethos, which if left unaddressed can sabotage the move to change. When Satya Nadella took over Microsoft in 2014, he had to first restructure the company to eliminate destructive internal competition so that all departments could focus on a common services goal. The result is a two-and-a-half-fold growth in the stock price over five years. On the other hand, when GE decided to launch GE Digital as a transformation vehicle, it did not release the subsidiary from the obligation of quarterly revenue and profitability targets. In addition, the new unit had to continue to meet GE’s software needs across business units, leaving it without the bandwidth to focus on true innovation and transformation.


How Machine Learning can be used with Blockchain Technology?

Machine learning algorithms have amazing capabilities of learning. These capabilities can be applied in the blockchain to make the chain smarter than before. This integration can help improve the security of the blockchain’s distributed ledger. The computation power of ML can also be used to reduce the time taken to find the golden nonce, and ML can be used to improve data-sharing routes. Further, we can build much better machine learning models using the decentralized data architecture of blockchain technology. Machine learning models can use the data stored in the blockchain network for making predictions or for data analysis purposes. Take the example of a smart, blockchain-based application where data is collected from different sources such as sensors, smart devices and IoT devices; the blockchain works as an integral part of the application, and machine learning models can be applied to the data for real-time analytics or predictions.


The tech recruiter – an unsung hero

The idiom ‘your first impression is your last impression’ holds true for recruiters. They have one opportunity to deliver the perfect elevator pitch to the candidate, convincing them why your company provides the best opportunity, in the time it takes to ride an elevator. Landing the right impression will determine the candidate's lasting opinion and employment decision. To understand this better, let's take a quick look at the talent landscape today. With the digitalization megatrend sweeping across Tech Inc., organizations are scurrying to bolster their workforce across technology skill sets. The Economic Times reported that Indian IT firms plan to hire over 150,000 freshers in FY22, and NASSCOM remarked that India's five largest companies are likely to hire 96,000 employees this year. Although this will be a huge boost for the $150 billion industry, the demand-supply technology talent gap is only widening. Today, it is the candidates who hold the power and have the pleasure of the last word, as prolonged notice periods allow them time to hedge their bets with the four or five job offers they have on hand. And the more skilled they are, the more offers they juggle.


Better Scrum Through Essence

First, an anecdote from Jeff Sutherland: ‘The VP of one of the biggest banks in the country [USA] said recently: “I have 300 product owners and only three were delivering. The other 297 were not delivering.” And, he said, “I checked on where the three that were delivering got the right way of working. They went to your class. So, you need to tell me what you are doing differently.” I said, “What we are doing differently is using Ivar’s work with Essence to really clarify to people what is working, what is not working, and what you need to do next to improve things.”’ By using Essence on many Scrum Master courses, we (Jeff, I, and others) have also observed that of the 21 components of the original Scrum Essentials, the average team implements a third of them well, a third of them poorly, and a third of them not at all. With that level and quality of implementation, it is not surprising that we are not always seeing the full potential that Scrum offers. At the heart of getting better Scrum through Essence is the use of the Scrum Foundation, the Scrum Essentials, and the Scrum Accelerator practices to play games, facilitate events, and drive team improvements.



Quote for the day:

"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis

Daily Tech Digest - October 15, 2021

You’ve migrated to the cloud, now what?

When thinking about cost governance, for example: in an on-premises infrastructure world, costs increase in increments when we purchase equipment, sign a vendor contract, or hire staff. These items are relatively easy to control because they require management approval and are usually subject to rigid oversight. In the cloud, however, an enterprise might have 500 virtual machines one minute and 5,000 a few minutes later when autoscaling functions engage to meet demand. Similar differences abound in security management and workload reliability. Technology leaders with legacy thinking face harsh trade-offs between control and the benefits of cloud. These benefits can include agility, scalability, lower cost, and innovation, and they require heavy reliance on automation rather than on manual legacy processes. This means that the skillsets of an existing team may not be the same skillsets needed in the new cloud order. When writing a few lines of code supplants plugging in drives and running cable, team members often feel threatened. This can mean that success requires not only a different way of thinking but also a different style of leadership.
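
To make "a few lines of code" concrete, here is a minimal sketch of a target-tracking autoscaling policy using AWS and boto3; the cloud provider, group name, and policy name are illustrative assumptions, not anything the article prescribes.

```python
# A minimal target-tracking autoscaling policy via boto3; the group and
# policy names are placeholders, and AWS is an illustrative choice only.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",  # assumed existing group
    PolicyName="hold-cpu-near-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # fleet grows/shrinks to hold roughly 50% CPU
    },
)
```

These few lines replace the purchase orders and cabling of the on-premises world, which is exactly why the fleet can jump from 500 to 5,000 machines without any approval gate.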


A new edge in global stability: What does space security entail for states?

Observers recently recentred the debate on a particular aspect of space security, namely anti-satellite (ASAT) technologies. The destruction of assets placed in outer space is high on the list of issues they identify as most pressing and requiring immediate action. As a result, some researchers and experts rolled out propositions to advance a transparent and cooperative approach, promoting the cessation of destructive operations, whether conducted in outer space or launched from the ground. One approach was the development of ASAT Test Guidelines, first initiated in 2013 by a Group of Governmental Experts on Outer Space Transparency and Confidence-Building Measures. Another is through general calls to ban anti-satellite tests, not only to build a more comprehensive arms control regime for outer space and prevent the production of debris, but also to reduce threats to space security and regulate destabilising force. Many space community members threw their support behind a letter urging the United Nations (UN) General Assembly to take up for consideration a kinetic ASAT Test Ban Treaty, with the aim of maintaining safe access to Earth orbit and decreasing concerns about collisions and the proliferation of space debris.


From data to knowledge and AI via graphs: Technology to support a knowledge-based economy

Leveraging connections in data is a prominent way of getting value out of data. Graph is the best way of leveraging connections, and graph databases excel at this. Graph databases make expressing and querying connections easy and powerful. This is why graph databases are a good match for use cases that require leveraging connections in data: anti-fraud, recommendations, Customer 360, or Master Data Management. From operational applications to analytics, and from data integration to machine learning, graph gives you an edge. There is a difference between graph analytics and graph databases. Graph analytics can be performed on any back end, as it only requires reading graph-shaped data. Graph databases are databases with the ability to fully support both reads and writes, utilizing a graph data model, API, and query language. Graph databases have been around for a long time, but the attention they have been getting since 2017 is off the charts. AWS and Microsoft moving into the domain, with Neptune and Cosmos DB respectively, has exposed graph databases to a wider audience.
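
To give a feel for "expressing and querying connections", here is a hedged anti-fraud sketch. Neo4j and its Python driver are used purely as a familiar example (the article names no specific product), and the graph schema, labels, and credentials are invented for illustration.

```python
# A fraud-style connection query against a graph database. The schema
# (Account, Device, USED) and credentials are invented for illustration.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

# Accounts that share a device with a known fraudulent account: a one-line
# traversal in a graph query language, but a painful self-join relationally.
query = """
MATCH (bad:Account {flagged: true})-[:USED]->(d:Device)<-[:USED]-(other:Account)
WHERE other <> bad
RETURN DISTINCT other.id AS suspect
"""

with driver.session() as session:
    for record in session.run(query):
        print(record["suspect"])

driver.close()
```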


Observability Is the New Kubernetes

So where will observability head in the next two to five years? Fong-Jones said the next step is to support developers in adding instrumentation to code, expressing a need to strike a balance between an easy, out-of-the-box experience and per-use-case annotations and customizations. Suereth said that over the next five years the OpenTelemetry project is heading toward being useful to app developers, where instrumentation can be particularly expensive. "Target devs to provide observability for operations instead of the opposite. That's done through stability and protocols." He said that right now observability, as with Prometheus, is much more focused on operations than on developer languages. "I think we're going to start to see applications providing observability as part of their own profile." Suereth continued that the OpenTelemetry open source project has an objective to offer an API with all the traces, logs, and metrics in a single pull, but it's still to be determined how much data should be attached to it.
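
For a sense of what developer-added instrumentation looks like today, here is a minimal sketch using OpenTelemetry's Python API; the span and attribute names are invented for the example, and exporting to the console stands in for a real backend.

```python
# Minimal OpenTelemetry tracing sketch in Python. The span and attribute
# names are invented; ConsoleSpanExporter stands in for a real backend.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire the SDK: spans created through the API get printed to the console.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("example.instrumentation")

# Developer-added instrumentation: one span around a unit of work.
with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("request.size_bytes", 1024)
    # ... application logic goes here ...
```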


Data Exploration, Understanding, and Visualization

Many scaling methods require knowledge of critical values within the feature distribution and can cause data leakage. For example, a min-max scaler should be fit on the training data only, rather than on the entire data set. If the minimum or maximum comes from the test set, you have introduced data leakage into the prediction process. ... The one-dimensional frequency plot shown below each distribution aids understanding of the data. At first glance, this information looks redundant, but these plots directly address problems with representing data in histograms or as distributions. For example, when data is transformed into a histogram, the number of bins must be specified. With too many bins it is difficult to decipher any pattern, and with too few bins the shape of the data distribution is lost. Moreover, representing data as a distribution assumes the data is continuous. When data is not continuous, this may indicate an error in the data or an important detail about the feature. One-dimensional frequency plots fill in the gaps where histograms fail.
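
The leakage-safe pattern for min-max scaling is short enough to show in full. This is a minimal sketch on synthetic data using scikit-learn's MinMaxScaler; only the fit-on-train, transform-on-test discipline is the point.

```python
# Leakage-safe min-max scaling: statistics are learned from the training
# split only, then reused on the test split. Data here is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.normal(loc=50, scale=10, size=(200, 3))  # stand-in feature matrix

X_train, X_test = train_test_split(X, test_size=0.25, random_state=0)

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)  # min/max come from train only
X_test_scaled = scaler.transform(X_test)        # reuse them; never refit on test

# Anti-pattern for contrast: fitting the scaler on all of X would let
# test-set extremes define the scaling range, leaking information into
# the prediction process.
```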


DevSecOps: A Complete Guide

Both DevOps and DevSecOps use some degree of automation for simple tasks, freeing up time for developers to focus on more important aspects of the software. The concept of continuous processes applies to both practices, ensuring that the main objectives of development, operations, or security are met at each stage. This prevents bottlenecks in the pipeline and allows teams and technologies to work in unison. By working together, development, operations, and security experts can write new applications and software updates in a timely fashion; monitor, log, and assess the codebase and security perimeter; and roll out new and improved code from a central repository. The main difference between DevOps and DevSecOps is quite clear. The latter incorporates a renewed focus on security that was previously overlooked by other methodologies and frameworks. In the past, the speed at which a new application could be created and released was emphasized, only for the release to be stuck in a frustrating silo as cybersecurity experts reviewed the code and pointed out security vulnerabilities.
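
One common way that renewed focus on security shows up in practice is an automated gate inside the pipeline rather than a late manual review. The sketch below is an assumed setup, not something the article prescribes: it runs pip-audit (a real dependency-audit tool for Python) and blocks the build when known vulnerabilities are reported.

```python
# Illustrative "shift-left" gate: run a dependency audit in the pipeline
# and block the build on findings. Wiring it in this way is an assumption.
import subprocess
import sys

result = subprocess.run(["pip-audit"], capture_output=True, text=True)
print(result.stdout)

# pip-audit exits non-zero when known vulnerabilities are found.
if result.returncode != 0:
    sys.exit("Security gate failed: vulnerable dependencies reported.")
```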


Skilling employees at scale: Changing the corporate learning paradigm

Corporate skilling programs have been founded on frameworks and models from the world of academia. Even as we have moved to digital learning platforms, the core tenets of these programs tend to remain the same. There is a standard course with finite learning material, a uniformly structured progression to navigate the learning, and the exact same assessment tool to measure progress. This uniformity and standardization have been the only approach for organizations to skill their employees at scale. As a result, organizations made a trade-off: content-heavy learning solutions that focus on knowledge dissemination but offer no way to measure the benefit, and are limited to vanity metrics, have become the norm for training the workforce at large. On the other hand, one-on-one coaching programs that promise results are reserved for the top one or two percent of the workforce, usually high-performing or high-potential employees. This is because such programs have a clear, measurable, and direct impact on behavioral change and job performance.


The Ultimate SaaS Security Posture Management (SSPM) Checklist

The capability of governance across the whole SaaS estate is both nuanced and complicated. While the native security controls of SaaS apps are often robust, it is the organization's responsibility to ensure that all configurations are properly set, from global settings to every user role and privilege. It only takes one unknowing SaaS admin changing a setting or sharing the wrong report for confidential company data to be exposed. The security team is burdened with knowing every app, user, and configuration and ensuring they are all compliant with industry and company policy. Effective SSPM solutions address these pain points and provide full visibility into the company's SaaS security posture, checking for compliance with industry standards and company policy. Some solutions even offer the ability to remediate right from within the solution. As a result, an SSPM tool can significantly improve security-team efficiency and protect company data by automating the remediation of misconfigurations throughout the increasingly complex SaaS estate.
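
Stripped to its core, such a posture check is a diff between live settings and a policy baseline. The sketch below is purely illustrative: the policy keys, the `fetch_settings` helper, and its sample return values are all invented stand-ins for each vendor's real admin API.

```python
# Purely illustrative: an SSPM-style check is, at its core, a diff between
# live app settings and a policy baseline. Keys and values are invented.
POLICY = {
    "mfa_required": True,
    "external_sharing": "disabled",
    "session_timeout_minutes": 30,
}

def fetch_settings(app: str) -> dict:
    # Hypothetical stand-in for a vendor's admin API; returns a sample
    # misconfiguration so the audit below has something to flag.
    return {
        "mfa_required": True,
        "external_sharing": "enabled",
        "session_timeout_minutes": 240,
    }

def audit(app: str) -> list:
    live = fetch_settings(app)
    return [
        f"{app}: {key} is {live.get(key)!r}, policy requires {expected!r}"
        for key, expected in POLICY.items()
        if live.get(key) != expected
    ]

for finding in audit("example-crm"):
    print(finding)
```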


Why gamification is a great tool for employee engagement

Gamification is the beating heart of almost everything we touch in the digital world. With employees working remotely, it is a golden solution for employers. If applied in the right format, gaming can help create engagement in today's remote working environment, motivate personal growth, and encourage continuous improvement across an organization. ... In the connected workspace, gamification is essentially a method of providing simple goals and motivations that rely on digital rather than in-person engagement. At the same time, there is a tacit understanding between game designer and "player" that when these goals are aligned in a way that benefits the organization, the rewards often extend beyond the bottom line. Engaged employees are a valuable part of achieving defined business goals, and studies show that non-engagement hurts the bottom line. At the same time, motivated employees are more likely to want to make the customer experience as satisfying as possible, especially if there is internal recognition of a job well done.


10 Cloud Deficiencies You Should Know

What happens if your cloud environment goes down due to challenges outside your control? If your answer is “Eek, I don’t want to think about that!” you’re not prepared enough. Disaster preparedness plans can include running your workload across multiple availability zones or regions, or even in a multicloud environment. Make sure you have stakeholders (and back-up stakeholders) assigned to any manual tasks, such as switching to backup instances or relaunching from a system restore point. Remember, don’t wait until you’re faced with a worst-case scenario to test your response. Set up drills and trial runs to make sure your ducks are quacking in a row. One thing you might not imagine the cloud being is … boring. Without cloud automation, there are a lot of manual and tedious tasks to complete, and if you have 100 VMs, they’ll require constant monitoring, configuration and management 100 times over. You’ll need to think about configuring VMs according to your business requirements, setting up virtual networks, adjusting for scale and even managing availability and performance. 
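
As a taste of what that automation looks like, here is a minimal sketch that sweeps every VM's health in one loop instead of 100 manual checks; AWS and boto3 are an illustrative choice, not something the article specifies, and region and credentials are assumed to come from the environment.

```python
# Minimal sketch: one loop replaces a hundred manual VM status checks.
# AWS/boto3 is an illustrative choice; credentials come from the environment.
import boto3

ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_instance_status")
for page in paginator.paginate(IncludeAllInstances=True):
    for status in page["InstanceStatuses"]:
        state = status["InstanceState"]["Name"]
        health = status.get("InstanceStatus", {}).get("Status", "unknown")
        if state != "running" or health != "ok":
            print(f"{status['InstanceId']}: state={state}, health={health}")
```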



Quote for the day:

"Leaders begin with a different question than others. Replacing who can I blame with how am I responsible?" -- Orrin Woodward