Daily Tech Digest - July 23, 2022

How CIOs can unite sustainability and technology

CIOs must be proactive in progressing these organizational shifts, as business leaders will continue to lean on them to ensure company technologies are providing solutions without contributing to an environmental problem. While in years past this was not an active concern, the information and communications technology (ICT) sector has recently become a larger source of climate-related impact. The sector produced only 1.5% of global CO2 emissions in 2007, but its share has risen to 4% today and could reach 14% by 2040. Fortunately, CIOs can course-correct by focusing on three key areas: Net zero - Utilize green software practices that can reduce energy consumption; Trust - Build systems that protect privacy and are fair, transparent, robust, and accessible; and Governance - Make ESG the focus of technology, not an afterthought. As a first step in this transition, CIOs can begin assessing their organization’s technology through the lens of sustainability to ensure that those goals are considered in every facet of the business. In addition, they can connect with other leaders in the company to encourage greater emphasis and dialogue in cross-organization planning for technology solutions as they relate to sustainability targets.


Design patterns for asynchronous API communication

Request and response topics are more or less what they sound like: a client sends a request message through a topic to a consumer; the consumer performs some action, then returns a response message through another topic back to the client. This pattern is a little less generally useful than the previous two. In general, it creates an orchestration architecture, where a service explicitly tells other services what to do. There are a couple of reasons why you might want to use topics to power this instead of synchronous APIs: You want to keep the low coupling between services that a message broker gives you. If the service that’s doing the work ever changes, the producing service doesn’t need to know about it, since it’s just firing a request into a topic rather than directly asking a service. The task takes a long time to finish, to the point where a synchronous request would often time out; in this case, you may decide to make use of the response topic but still make your request synchronously. You’re already using a message broker for most of your communication and want to make use of the existing schema enforcement and backwards compatibility that are automatically supported by the tools used with Kafka.
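The flow is easier to see in code. Here is a minimal sketch of the client side using the kafka-python library; the topic names, broker address, and correlation-id header are illustrative assumptions, not details from the article:

```python
# Client side of request/response over topics (kafka-python).
# A correlation id lets the client match the eventual response
# to the request it fired earlier.
import uuid
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

corr_id = str(uuid.uuid4()).encode()
producer.send(
    "task.requests",                       # request topic a worker consumes
    value=b"resize-image:42",
    headers=[("correlation_id", corr_id)],
)
producer.flush()

# Wait on the response topic; whichever service did the work writes
# its result here, echoing back the same correlation id.
consumer = KafkaConsumer("task.responses", bootstrap_servers="localhost:9092")
for msg in consumer:
    headers = dict(msg.headers or [])
    if headers.get("correlation_id") == corr_id:
        print("response:", msg.value)
        break
```

Note that the client never learns which service handled the request, which is exactly the low coupling described above.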


What is Data Gravity? AWS, Azure Pull Data to the Cloud

As enterprises create ever more data, they aggregate, store, and exchange this data, attracting progressively more applications and services to begin analyzing and processing it. This “attraction” occurs because these applications and services require higher-bandwidth and/or lower-latency access to the data. Therefore, as data accumulates, instead of pushing data over networks towards applications and services, “gravity” begins pulling applications and services to the data. This process repeats, which produces a compounding effect: as the scale of data grows, it becomes “heavier” and increasingly difficult to replicate and relocate. Ultimately, the “weight” of this data being created and stored generates a “force” that results in an inability to move the data, hence the term data gravity. Data gravity presents a fundamental problem for enterprises, which is the inability to move data at scale. Consequently, data gravity impedes enterprise workflow performance, heightens security and regulatory concerns, and increases costs.


Windows 11 is getting a new security setting to block ransomware attacks

The new feature is rolling out to Windows 11 in a recent Insider test build, but the feature is also being backported to Windows 10 desktop and server, according to Dave Weston, vice president of OS Security and Enterprise at Microsoft. "Win11 builds now have a DEFAULT account lockout policy to mitigate RDP and other brute force password vectors. This technique is very commonly used in Human Operated Ransomware and other attacks – this control will make brute forcing much harder which is awesome!," Weston tweeted. Weston emphasized "default" because the policy is already an option in Windows 10 but isn't enabled by default. That's big news, and it parallels Microsoft's default block on internet macros in Office on Windows devices; macros are another major avenue for malware attacks on Windows systems through email attachments and links. Microsoft paused the default internet macro block this month but will re-release it soon. The default block on untrusted macros is a powerful control against a technique that relied on end users being tricked into clicking an option to enable macros, despite warnings in Office against doing so.


Untangling Enterprise API Architecture with GraphQL

GraphQL is a query language that allows you to describe your data requirements in a more powerful and developer-friendly way than REST or SOAP. Its composability can help untangle enterprise API architecture. GraphQL becomes the communication layer for your services. Using the GraphQL specification, you get a unified experience when interacting with your services. Every service in your API architecture becomes a graph that exposes a GraphQL API. In this graph, everyone who wants to integrate with or consume the GraphQL API can find all the data it contains. Data in GraphQL is represented by a schema that describes the available data structures, the shape of the data, and how to retrieve it. Schemas must comply with the GraphQL specification, and the part of the organization responsible for the service can keep this schema coherent. GraphQL composability allows you to combine these different graphs — or subgraphs — into one unified graph. Many tools are available to create such a “graph of graphs.”
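As a concrete illustration, here is a minimal single-service graph using Python’s graphql-core library; the schema, data, and resolver are invented for the example:

```python
# One service's graph: a schema describing the available data,
# plus a resolver that says how to fetch it (graphql-core).
from graphql import build_schema, graphql_sync

schema = build_schema("""
  type User {
    id: ID!
    name: String!
  }
  type Query {
    user(id: ID!): User
  }
""")

USERS = {"1": {"id": "1", "name": "Ada"}}

# graphql-core's default field resolver calls this with (info, **args).
root = {"user": lambda info, id: USERS.get(id)}

result = graphql_sync(schema, '{ user(id: "1") { name } }', root_value=root)
print(result.data)  # {'user': {'name': 'Ada'}}
```

Composition tooling can then merge schemas like this one from many services into the unified “graph of graphs.”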


How The Great Resignation Will Become The Great Reconfiguration

We are witnessing a great reconfiguration of how employees expect to be treated by employers. Henry Ford gave his workers a full two-day weekend as early as 1926, and now a weekend is expected in most office-based jobs—unless the job involves serving customers over the weekend! We have certain expectations of the employer and employee relationship, and what was normal before the pandemic is now being challenged. Even Wall Street cannot hold back the tide. People expect more flexibility over their hours and work location. Within a few years, this will be normalized by top talent expecting it and that expectation filtering throughout company culture. This is how work will function post-pandemic. The Great Resignation is the first step, but eventually, I believe we will call the 2020s the Great Reconfiguration. ... WFH will live on - You might want your team back in the office, but they know they can be more productive remotely, and research backs up the employees. A new Harvard study suggests that all that in-person time can be compressed into just one or two days a week.


Will Your Cyber-Insurance Premiums Protect You in Times of War?

Due to the changing market and geopolitical situation, you need to be keenly aware of the exact kind of cyber-insurance coverage your organization requires. Your decisions should be dictated by the industry you're working in, the security risk, and how much you stand to lose in the event of an attack. It's important to note that insurance providers are also being more stringent in their requirements for companies to even obtain cyber coverage in the first place. Carriers are increasingly requiring companies to practice good cyber hygiene and have rigid cybersecurity protocols in place before even offering a quote. Once you have proper cybersecurity protocols in place, you should be better positioned to qualify for adequate plans. However, remember that no two plans are alike or equally inclusive. When choosing a plan, be sure to look for any fine print regarding act-of-war and terrorism exclusions or those for other "hostile acts." Even when you've done everything right, your carrier can still attempt to deny you coverage under these loopholes.


The new CIO playbook: 7 tips for success from day one

It’s possible that, up to now, your focus has been solely on technology. One of the big differentiators between working on an IT team, even in a leadership role, and being CIO is that you will need to understand how technology fits into the larger business goals of the company. You will need to be a technology translator and advocate for the CEO, business leadership, and board. For that, you have to understand the business first. “We can come up with creative technical solutions,” says Roberge. “We know you need an email system, a CRM system, and an ERP. But how does the business want to use those tools? How is the sales guy going to sell product and be able to get a quote out, get the tax requirements, things like that?” Business leaders are unlikely to understand technology the way you do. So, you must understand the business in order to help the other business units, the CEO, and the board understand how technology can fit into their goals. “As technology experts, we know our technology extremely well,” says Roberge.


Explained: How to tell if artificial intelligence is working the way we want it to

Far from a silver bullet, explanation methods have their share of problems. For one, Ghassemi’s recent research has shown that explanation methods can perpetuate biases and lead to worse outcomes for people from disadvantaged groups. Another pitfall of explanation methods is that it is often impossible to tell whether the explanation method is correct in the first place. One would need to compare the explanations to the actual model, but since the user doesn’t know how the model works, this is circular logic, Zhou says. He and other researchers are working on improving explanation methods so they are more faithful to the actual model’s predictions, but Zhou cautions that even the best explanation should be taken with a grain of salt. “In addition, people generally perceive these models to be human-like decision makers, and we are prone to overgeneralization. We need to calm people down and hold them back to really make sure that the generalized model understanding they build from these local explanations are balanced,” he adds.
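That notion of faithfulness can be made concrete. The sketch below is a generic local-surrogate setup (not the researchers’ actual method): it fits a simple linear model to a black-box model’s predictions around one input and scores how well the explanation tracks the model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2])     # nonlinear ground truth
black_box = RandomForestRegressor(random_state=0).fit(X, y)

x0 = X[0]                                   # the input to "explain"
# Sample a neighborhood around x0 and fit a linear surrogate to the
# black box's predictions there -- the "local explanation".
nbrs = x0 + rng.normal(scale=0.3, size=(200, 5))
surrogate = LinearRegression().fit(nbrs, black_box.predict(nbrs))

# Fidelity: agreement between explanation and model on fresh neighbors.
test = x0 + rng.normal(scale=0.3, size=(200, 5))
r2 = surrogate.score(test, black_box.predict(test))
print(f"local surrogate R^2 vs. black box: {r2:.2f}")
```

Even a high score only shows agreement with the model’s outputs, not with its reasoning, which is the circularity Zhou points out.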


Future-Proofing Organisations Through Transparency

Partners that trust each other perform better. Both parties should clearly understand the decisions and actions they own. Consequently, organisations cooperate with less friction and enhance accessibility to relevant information. A study in the Harvard Business Review notes that managers frequently adopt a trust-but-verify approach, evaluating potential partner behaviours during negotiations to determine whether they are open and honest. As one manager in the study advised, “To see if [the] person is forthcoming, ask a question you know the answer to”. Transparent companies are viewed as ‘ethical’ as their customers believe they have nothing to hide. The new era of the business-to-business model demands transparency. Companies want to know that what they do matters and to trace a project back to their organisation’s vision. In a modern world where sustainability is not just a buzzword, clients want to know that partnerships are built with brands that support their morals. Unsatisfied customers disengage from a company to find one that works with them to achieve a greater outcome and takes accountability for its actions.



Quote for the day:

"People will not change their minds but they will make new decisions based upon new information." -- Orrin Woodward

Daily Tech Digest - July 22, 2022

Can automated test tools eliminate QA?

The traditional quality assurance process is multi-step and requires at least two types of software testers: The first tester exercises data edit and processing functions in applications, ensuring that all of these processes are working correctly. The second QA tester is more familiar with the business’s needs and how the application should address them. This tester is usually savvy about application technical details as well as the business systems with which the application is going to interact. But there’s more to QA than just these two front-running functions. Applications must be integration-tested to ensure that they interact and exchange data correctly with all of the different systems they work with. They must also be moved to application staging areas where they can be regression-tested. This ensures that they don’t break any other existing software with which they interface and that they can run, in production, the maximum number of transactions for which they were designed. From an IT standpoint, applications must pass through all of these hurdles before they can go live.


The downside of digital transformation: why organisations must allow for those who can’t or won’t move online

Through our current research we find the reality of a digitally enabled society is, in fact, far from perfect and frictionless. Our preliminary findings point to the need to better understand the outcomes of digital transformation at a more nuanced, individual level. Reasons vary as to why a significant number of people find accessing and navigating online services difficult. And it’s often an intersection of multiple causes related to finance, education, culture, language, trust or well-being. Even when given access to digital technology and skills, the complexity of many online requirements and the chaotic life situations some people experience limit their ability to engage with digital services in a productive and meaningful way. The resulting sense of disenfranchisement and loss of control is regrettable, but it isn’t inevitable. Some organisations are now looking for alternatives to a single-minded focus on transferring services online. Other organisations are considering partnerships with intermediaries who can work with individuals who find engaging with digital services difficult.


Authentic leadership: Building an organization that thrives

Becoming an authentic leader takes a lot of self-reflection and self-awareness. You’ll need to work to understand yourself and others, using empathy and compassion as your driving force. For examples of authentic leadership in the tech industry, you can look to former CEO of Apple Steve Jobs, former CEO of GE Jack Welch, former CEO of Xerox Anne Mulcahy, and former CEO of IBM Sam Palmisano. These leaders are all known for their authentic leadership styles that helped them drive business success. To become an authentic leader, you’ll need to embark on a path of self-discovery, establish a strong set of values and principles that will guide you in your decision-making, and be completely honest with yourself about who you are. An authentic leader isn’t afraid to make mistakes or to own up to mistakes when they happen. You’ll need to make sure you’re someone who takes accountability, maintains calm under pressure, and can be vulnerable with coworkers and employees. It’s important to know your own strengths and weaknesses as an authentic leader and to identify how you cope with success, failure, and setbacks. 


Reporting to build trust: A framework

Whether you’re preparing an integrated annual report or a stand-alone sustainability report, the publication has to be informed by steps one and two. It’s also critical to put the right resources in place, in terms of both time and people, along with the right incentives and the right oversight. Companies can truly be confident in what they report only when it is subject to board oversight, is relevant to the company’s strategy, and has the right governance, systems, and controls in place to measure progress towards targets and plans. Many large companies that have teams of hundreds working on financial reporting often have only a handful of people working on sustainability reporting. Even with the best intentions, less-resourced areas have a higher potential to miss something that turns out to be critically important. The business world’s financial reporting capabilities have been built over 170 years. When it comes to sustainability reporting, we need to move quickly to build the right capabilities—using what we’ve learned from financial reporting. And if sustainability reporting is to be on par with financial reporting for informing resource allocation decisions, it needs to be just as robust and relevant.


Six reasons successful leaders love questions

Comparing questions to dreams is Straus’s way of saying that questions hold the key to better understanding the subconscious dimensions of the person asking the questions. It can be extremely difficult to understand why employees think the way they do, and how to help them change their mindset and behavior if required. It then stands to reason that questions might also help leaders better understand the culture and habits of their organization. In his 1988 article, “Toward a History of the Question,” Dutch philosopher C.E.M. Struyker Boudier writes, “In and by way of his questions the human being can reach out to the divine, and likewise degrade himself to the demonic inferno of evil.” Questioning forces people to the line between good and bad, yes and no, pro and con. Asking questions is closely related to making a choice. We cannot address everything at once, so to ask a question, we must decide what to focus on and how. We have the choice to take an approach that is optimistic or pessimistic, abstract or concrete, individual or collective, broad or narrow, past- or future-oriented, etc. 


Discovering the Versatility of OpenEBS

OpenEBS provides storage for stateful applications running on Kubernetes, including dynamic local persistent volumes (like the Rancher local path provisioner) or replicated volumes using various "data engines". Like Prometheus, which can be deployed on a Raspberry Pi to monitor the temperature of the beer or sourdough cultures in your basement but can also scale up to monitor hundreds of thousands of servers, OpenEBS works for simple projects and quick demos as well as for large clusters with sophisticated storage needs. OpenEBS supports many different "data engines", and that can be a bit overwhelming at first. But these data engines are precisely what makes OpenEBS so versatile. There are "local PV" engines that typically require little or no configuration and offer good performance, but they exist on a single node and become unavailable if that node goes down. And there are replicated engines that offer resilience against node failures. Some of these replicated engines are super easy to set up, but the ones offering the best performance and features will take a bit more work.


Cyber Resiliency: What It Is and How To Build It

Creating a cyber-resilience plan requires buy-in and input from all parts of the organization, including finance, IT, and operations. “It’s important that departments work together to classify information and risk, as well as to determine where to put controls and where responsibilities lie,” Piker says. “Once a plan has been agreed upon, a budget must be carved out to fund the actual implementation of the plan.” It's important to engage the entire organization. “This is not just a technical issue under the control of a CIO or CISO,” Adkins says. “Your employees and vendors can play a critical role in spotting potential attacks to limit their impact.” Additionally, with the continuing trend toward remote work, employee cyber awareness and training is more important than ever. “This means formal policies, training, exercises simulation, and ongoing analysis of risks,” Adkins says. Adkins advises organizations to use tabletop exercises to test incident practices and times. “It's much easier to fix a flaw in your planning and processes when you’re not in the middle of a crisis,” he says. 


How kitemarks are kicking off IoT regulation

Interestingly, all those we have seen apply for the scheme have chosen to go for Gold because they want to be seen to be adhering to the highest levels, and it’s been attracting some big international consumer brands. The smaller players that previously had difficulty understanding and navigating the red tape involved in the Code of Practice/ETSI have also valued the guidance and human touch of an assessor. The theory is that the product assurance scheme will spur compliance ahead of the PSTI, making the transition that much easier for the IoT industry, and the fact that many have aimed high suggests the approach is working. Manufacturers like the visibility conferred by the badge, which then becomes a differentiator in the marketplace, as well as ensuring future compliance. It’s for these reasons that many are watching the assurance rollout with interest. IoT kitemark schemes vary internationally, from labels that denote compliance with a set of cybersecurity criteria, to a single label that attests basic security features are provided, to several tiers or even a label that lists cybersecurity information about the IoT device.


4 tips for leading remote IT teams

Traditional enterprises tend to have a “we will train our employees only as much as we have to” mentality. However, this approach will make your employees more likely to seek other opportunities where they feel more valued and prepared. Of course, there is always the risk of employees leaving with their newfound skills, but keeping employees undertrained can be worse for the business. Set aside a generous annual budget for training and development and help map out a personalized training path for each employee. This is critical to employee happiness and long-term business planning. These plans should also demonstrate growth opportunities that benefit each employee – not just the organization. In-person training is great, but don’t underestimate the value of virtual training. While a personal connection with instructors can often provide more knowledge and attention, the convenience of virtual training makes it a popular alternative these days. Encourage your employees to explore training opportunities where they’re located.


How Microcontainers Gain Against Large Containers

A microcontainer is a container optimized for efficiency. It still contains all the files needed to provide scaling, isolation, and parity for the software application, but the number of files kept in the image is pared down. The important files left in a microcontainer are the shell, the package manager, and the standard C library. In parallel, there exists the concept of ‘distroless’ images, from which all unused files are stripped out entirely. The distinction between microcontainers and distroless images is worth emphasizing: a microcontainer still contains some files that go unused at runtime, because they are required for the system to remain complete. A microcontainer operates the same way as a regular container and performs all the same functions; the only difference is that its internal files have been trimmed, making the image smaller. A microcontainer keeps an optimized set of files, so it still includes all the files and dependencies required for the application to run, but in a lighter and smaller format.



Quote for the day:

"The first task of a leader is to keep hope alive." -- Joe Batten

Daily Tech Digest - July 21, 2022

Google Launches Carbon, an Experimental Replacement for C++

While Carbon began as a Google internal project, the development team ultimately wants to reduce contributions from Google, or any other single company, to less than 50% by the end of the year. They want to hand the project off to an independent software foundation, where its development will be led by volunteers. ... The team wants to release a core working version (“0.1”) by the end of the year. Carbon will be built on a foundation of modern programming principles, including a generics system that would remove the need to check and recheck the code for each instantiation. Another much-needed feature lacking in C++ is memory safety. Memory access bugs are one of the largest culprits of security exploits. Carbon designers will look for ways to better track uninitialized states, design APIs and idioms that support dynamic bounds checks, and build a comprehensive default debug build mode. Over time, the designers plan to build a safe Carbon subset. ... Carbon is for those developers who already have large codebases in C++, which are difficult to convert into Rust. Carbon is specifically what Carruth called a “successor language,” which is built atop an already existing ecosystem, C++ in this case.


The Cost of Production Blindness

DevOps and SRE are roles that didn’t exist back then. Yet today, they’re often essential for major businesses. They brought with them tremendous advancements to the reliability of production, but they also brought a cost: distance. Production is in the amorphous cloud, which is accessible everywhere. Yet it’s never been further away from the people who wrote the software powering it. We no longer have the fundamental insight we took for granted a bit over a decade ago. Yes and no. We gave up some insight and control and got a lot in return: stability, simplicity, and security. These are pretty incredible benefits. We don’t want to give these benefits up. But we also lost some insight, debugging became harder, and complexity rose. ... Log ingestion is probably the most expensive feature in your application. Removing a single line of log code can end up saving thousands of dollars in ingestion and storage costs. We tend to overlog, since the alternative is production issues that we can’t trace to their root cause. We need a middle ground: we want the ability to follow an issue through without overlogging. Developer observability lets you add logs dynamically as needed in production.
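Short of adopting dynamic-logging tooling, one cheap middle ground is to keep verbose log lines but sample them, so only a fraction is ever ingested. A minimal sketch using Python’s standard logging module (the opt-in flag and the 1% rate are invented for illustration):

```python
import logging
import random

class SampleFilter(logging.Filter):
    """Drop most records that opt in to sampling; pass everything else."""

    def __init__(self, rate: float):
        super().__init__()
        self.rate = rate

    def filter(self, record: logging.LogRecord) -> bool:
        if getattr(record, "sampled", False):
            return random.random() < self.rate
        return True

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")
logger.addFilter(SampleFilter(rate=0.01))   # ingest ~1% of sampled lines

for item_id in range(10_000):
    # High-volume diagnostic line: now ~1% of its former ingestion cost.
    logger.info("processed item %d", item_id, extra={"sampled": True})
```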


UK government introduces data reforms legislation to Parliament

Suggested changes included removing organisations’ requirements to designate data protection officers (DPOs), ending the need for mandatory data protection impact assessments (DPIAs), introducing a “fee regime” for subject access requests (SARs), and removing the requirement to review data adequacy decisions every four years. All of these are now included in the updated Bill in some form. “We now have confirmation of what the UK’s post-GDPR data framework is intended to look like,” said Edward Machin, a senior lawyer in Ropes & Gray’s data, privacy and cyber security practice. ... “The GDPR isn’t perfect and it would be foolish for the UK not to learn from those lessons in its own approach, but it’s walking a tightrope between improvements to the current framework and performative changes for the sake of ripping up Brussels red tape. My initial impressions of the Bill are that the government has struck the balance in favour of business and overlooked some civil society concerns, so I would think that reduced rights and safeguards for individuals will be areas that are targeted for revision before the Bill is finalised.”
 

Hackers can spoof commit metadata to create false GitHub repositories

Researchers identified that a threat actor could tamper with commit metadata to make a repository appear older than it is. Alternatively, they can deceive developers by promoting repositories as trusted because reputable contributors appear to be maintaining them. It is also possible to spoof the committer’s identity and attribute the commit to a genuine GitHub account. With open source software, developers can create apps faster and may even skip third-party code audits if they are confident the software comes from a reliable source. They may choose GitHub repositories that are actively maintained or whose contributors appear trustworthy. Checkmarx researchers explained in their blog post that threat actors could manipulate the timestamps of the commits listed on GitHub. Fake commits can also be generated automatically and added to the user’s GitHub activity graph, allowing an attacker to appear active on the platform for a long time. The activity graph displays activity on private and public repositories, making it all but impossible to identify the fake commits.
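The underlying weakness is easy to demonstrate: git takes commit metadata from the client, so dates and identities are whatever the committer’s environment claims. A minimal sketch (run inside any scratch repository; the date, name, and email are obviously invented):

```python
import os
import subprocess

# Git reads author metadata from environment variables, so a commit
# can claim any date and any identity.
env = dict(
    os.environ,
    GIT_AUTHOR_DATE="2015-01-01T12:00:00",
    GIT_COMMITTER_DATE="2015-01-01T12:00:00",
    GIT_AUTHOR_NAME="Reputable Contributor",
    GIT_AUTHOR_EMAIL="someone-else@example.com",
)
subprocess.run(
    ["git", "commit", "--allow-empty", "-m", "backdated commit"],
    env=env, check=True,
)
# `git log` now shows a 2015 commit attributed to the spoofed identity.
```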


Hackers turn to cloud storage services in attempt to hide their attacks

The group is widely believed to be linked to the Russian Foreign Intelligence Service (SVR), responsible for several major cyberattacks, including the supply chain attack against SolarWinds, the US Democratic National Committee (DNC) hack, and espionage campaigns targeting governments and embassies around the world. Now they're attempting to use legitimate cloud services, including Google Drive and Dropbox – and have already used this tactic as part of attacks that took place between May and June this year. The attacks begin with phishing emails sent out to targets at European embassies, posing as invites to meetings with ambassadors, complete with a supposed agenda attached as a PDF. The PDF is malicious and, if it worked as intended, it would call out to a Dropbox account run by the attackers to secretly deliver Cobalt Strike – a penetration-testing tool popular with malicious attackers – to the victim's device. However, this initial call out was unsuccessful earlier this year, something researchers suggest is down to restrictive policies on corporate networks about using third-party services.

 

How Zero Trust can stop the catastrophic outcomes of cyberattacks on critical infrastructure

The impending necessity of Zero Trust should be recognised by every government and CNI provider around the world if they are to have any hope of mitigating sophisticated attacks like ransomware. Critical infrastructure is the backbone of a country’s economy and social order. It is impossible to maintain a sustainable society when sectors like emergency healthcare, energy distribution, food and agriculture, education, and financial services are constantly under disruptive threats. In May 2021, the US government issued an executive order directing federal agencies to improve their cybersecurity postures and recommended moving toward a Zero Trust architecture as the solution. Following this executive order, the Pentagon launched a Zero Trust office in December 2021, and in January 2022, President Biden further emphasised the urgency of moving to a Zero Trust architecture by mandating all government agencies to achieve specific Zero Trust goals by the end of Fiscal Year 2024.


Transparency in the shadowy world of cyberattacks

Focusing on the fundamentals of software security is in some ways more important to raise all of us above the level of insecurity we see today. We curate and use threat intelligence to protect billions of users–and have been doing so for some time. But you need more than intelligence, and you need more than security products–you need secure products. Security has to be built in, not just bolted on. Aurora showed us that we (and many in the industry) were doing cybersecurity wrong. Security back then was often “crunchy on the outside, chewy in the middle.” Great for candy bars, not so great for preventing attacks. We were building high walls to keep bad actors out, but if they got past those walls, they had wide internal access. The attack helped us recognize that our approach needed to change–that we needed to double down on security by design. We needed a future-oriented network, one that reflected the openness, flexibility, and interoperability of the internet, and the way people and organizations were already increasingly working. In short, we knew that we had to redesign security for the Cloud.


The importance of secure passwords can’t be emphasized enough

Mobile phones are a major and often overlooked concern. We found that 30% of respondents do not use antivirus on their phones, meaning they are not properly securing their devices. This is especially a concern as the demographic most often on their phones is also the one less worried about online threats and vulnerabilities. Password managers, and passwords stored in an electronic file and/or in physical format, are used most frequently for work devices and least frequently for personal phones. The autofill option and password managers are used most often by 25-44-year-olds, and hard format is used more by those between 55 and 65. But even if work accounts are secure, that doesn’t mean that sensitive information from work doesn’t carry over onto personal phones. Email and communication apps connected to work accounts are often downloaded onto personal devices, and if someone uses the same passwords across accounts, their personal devices being compromised means their work ones are as well.


Unlocking the potential of AI to increase customer retention

A true AI-fuelled CRM goes beyond simple automation. To provide real benefit, AI must aggregate data from multiple different sources — including in-house sales, marketing, and service tools. It needs to break down organisational silos to identify patterns in interactions and offer deeper customer insights. Some organisations feel they don’t have enough primary data to build effective predictive models, yet vast amounts of organisational data are generated around a single customer or prospect. The trick is to leverage a CRM that understands and captures all of these interactions in a format that can fuel AI initiatives. By breaking down the silos between business units and integrating all of the valuable data that they hold, organisations will be able to benefit from the most advanced predictive models. This is often more challenging than it should be to implement. Business systems are typically good at providing a snapshot of an organisation on any given day, but they aren’t usually as good at gathering historical information.


Burnout: 3 steps to prevent it on your team

Company culture doesn’t just happen. Leaders must actively maintain and shape it to identify ongoing opportunities that empower employees to support and contribute to it. Employee contributions can be as small as internal pulse surveys or as large as designing new groups or initiatives. Think about creating a club to encourage the workforce to participate in the hiring process and weigh in on how candidates would mesh with internal teams. This engagement would directly shape how the organization operates and builds positive working environments for employees – no matter the physical or remote work setting. By opening the door for employees to get involved and provide input, leaders can identify signs of fatigue earlier, address pain points before employees reach the pinnacle of exhaustion, and create a community that motivates and engages the workforce. ... Too often, leaders view benefits as the silver bullet to burnout. But benefits alone won’t cure feelings of burnout. If your workforce is giving direct feedback on areas that need improvement, simply listening is not enough. Take action to meet these needs and make your actions known.



Quote for the day:

"Take time to deliberate; but when the time for action arrives, stop thinking and go in." - - Andrew Jackson

Daily Tech Digest - July 20, 2022

CIOs contend with rising cloud costs

“A lot of our clients are stuck in the middle,” says Ashley Skyrme, senior managing director and leader of the Global Cloud First Strategy and Consulting practice at Accenture. “The spigot is turned on and they have these mounting costs because cloud availability and scalability are high, and more businesses are adopting it.” And as the migration progresses, cloud costs soon rank second — next only to payroll — in the corporate purse, experts say. The complexity of navigating cloud use and costs has spawned a cottage industry of SaaS providers lining up to help enterprises slash their cloud bills. ... “Cloud costs are rising,” says Bill VanCuren, CIO of NCR. “We plan to manage within the large volume agreement and other techniques to reduce VMs [virtual machines].” Naturally, heavy cloud use is compounding the costs of maintaining or decommissioning data centers that are being kept online to ensure business continuity as the migration to the cloud continues. But more significant to the rising cost problem is the lack of understanding that the compute, storage, and consumption models on the public cloud are varied, complicated, and often misunderstood, experts say.


How WiFi 7 will transform business

In practice, WiFi 7 might not be rolled out for another couple of years — especially as many countries have yet to delicense the new 6GHz spectrum for public use. However, it is coming, so it’s important to plan for it, as things could progress more quickly than first thought. In the same way that bigger motorways are built and traffic increases to fill them, faster, more stable WiFi will encourage more usage and users; to quote the popular business mantra: “If you build it, they will come.” WiFi 7 is a significant improvement over all the past WiFi standards. It uses the same spectrum chunks as WiFi 6/6e, and can deliver data more than twice as fast. It has a much wider bandwidth for each channel as well as a raft of other improvements. It is thought that WiFi 7 could deliver speeds of 30 gigabits per second (Gbps) to compatible devices and that the new standard could make running cables between devices completely obsolete. It’s now not necessarily about what you can do with the data, but how you actually physically interact with it.


How to Innovate Fast with API-First and API-Led Integration

Many have assembled their own technologies as they have tried to deliver a more productive, cloud native platform-as-a-shared-service that different teams can use to create, compose and manage services and APIs. They try to combine integration, service development and API-management technologies on top of container-based technologies like Docker and Kubernetes. Then they add tooling on top to implement DevOps and CI/CD pipelines. Afterward come the first services and APIs to help expose legacy systems via integration, for example. When developers have access to such a platform within their preferred tools and can reuse core APIs instead of spending time on legacy integration, they can spend more time on designing and building the value-added APIs faster. At best, a group can use all the capabilities, because that spreads the adoption of best practices, helps get teams ramped up faster and makes them deliver quicker. But at the very least, APIs should be shared and governed together.


Using Apache Kafka to process 1 trillion inter-service messages

One important decision we made for the Messagebus cluster is to only allow one proto message per topic. This is configured in Messagebus Schema and enforced by the Messagebus-Client. This was a good decision to enable easy adoption, but it has led to numerous topics existing. When you consider that for each topic we create, we add numerous partitions and replicate them with a replication factor of at least three for resilience, there is a lot of potential to optimize compute for our lower throughput topics. ... Making it easy for teams to observe Kafka is essential for our decoupled engineering model to be successful. We therefore have automated metrics and alert creation wherever we can to ensure that all the engineering teams have a wealth of information available to them to respond to any issues that arise in a timely manner. We use Salt to manage our infrastructure configuration and follow a Gitops style model, where our repo holds the source of truth for the state of our infrastructure. To add a new Kafka topic, our engineers make a pull request into this repo and add a couple of lines of YAML. 
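The compute overhead behind that concern is easy to quantify. Here is a sketch using confluent-kafka’s admin API (Cloudflare’s actual pipeline is the YAML-plus-Salt flow described above; the topic name and sizing here are invented):

```python
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# 12 partitions x replication factor 3 = 36 partition replicas the
# cluster must store and keep in sync, even if this topic's
# throughput turns out to be tiny.
topic = NewTopic("email.message-sent", num_partitions=12, replication_factor=3)

for name, future in admin.create_topics([topic]).items():
    future.result()           # raises if creation failed
    print(f"created {name}")
```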


Load Testing: An Unorthodox Guide

A common shortcut is to generate the load on the same machine that the server is running on (i.e., the developer’s laptop). What’s problematic about that? Generating load consumes CPU, memory, network traffic, and I/O, and that will naturally skew your test results regarding what capacity your server can handle. Hence, you’ll want to introduce the concept of a loader: A loader is nothing more than a machine that runs e.g. an HTTP client that fires off requests against your server. A loader sends n RPS (requests per second) and, of course, you’ll be able to adjust the number across test runs. You can start with a single loader for your load tests, but once that loader struggles to generate the load, you’ll want to have multiple loaders. (Like 3 in the graphic above, though there is nothing magical about 3; it could be 2, it could be 50.) It’s also important that the loader generates those requests at a constant rate, best done asynchronously, so that response processing doesn’t get in the way of sending out new requests. ... Bonus points if the loaders aren’t on the same physical machine, i.e. not just adjacent VMs, all sharing the same underlying hardware.
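A minimal open-loop loader makes the constant-rate point concrete. This sketch uses asyncio and aiohttp; the target URL, rate, and duration are placeholder assumptions:

```python
import asyncio
import aiohttp

RPS = 50                                    # requests per second per loader
URL = "http://localhost:8080/api/health"    # placeholder target

async def fire(session: aiohttp.ClientSession) -> None:
    try:
        async with session.get(URL) as resp:
            await resp.read()
    except aiohttp.ClientError:
        pass  # a real loader would count errors rather than swallow them

async def main(duration_s: int = 30) -> None:
    async with aiohttp.ClientSession() as session:
        tasks = []
        for _ in range(duration_s * RPS):
            # Schedule the request and move on immediately: response
            # processing never delays the next send, keeping the rate constant.
            tasks.append(asyncio.create_task(fire(session)))
            await asyncio.sleep(1 / RPS)
        await asyncio.gather(*tasks)

asyncio.run(main())
```

Scaling out is then just a matter of running the same script on more loader machines.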


Open-Source Testing: Why Bug Bounty Programs Should Be Embraced, Not Feared

There are two main challenges: one around decision-making, and another around integrations. Regarding decision-making, the process can really vary according to the project. For example, if you are talking about something like Rails, then there is an accountable group of people who agree on a timetable for releases and so on. However, within the decentralized ecosystem, these decisions may be taken by the community. For example, the DeFi protocol Compound found itself in a situation last year where in order to agree to have a particular bug fixed, token-holders had to vote to approve the proposal. ... When it comes to integrations, these often cause problems for testers, even if their product is not itself open-source. Developers include packages or modules that are written and maintained by volunteers outside the company, where there is no SLA in force and no process for claiming compensation if your application breaks because an open-source third party library has not been updated, or if your build script pulls in a later version of a package that is not compatible with the application under test.


3 automation trends happening right now

IT automation specifically continues to grow as a budget priority for CIOs, according to Red Hat’s 2022 Global Tech Outlook. While it’s outranked as a discrete spending category by the likes of security, cloud management, and cloud infrastructure, in reality, automation plays an increasing role in each of those areas. ... While organizations and individuals automate tasks and processes for a bunch of different reasons, the common thread is usually this: Automation either reduces painful (or simply boring) work or it enables capabilities that would otherwise be practically impossible – or both. “Automation has helped IT and engineering teams take their processes to the next level and achieve scale and diversity not possible even a few years ago,” says Anusha Iyer, co-founder and CTO of Corsha. ... Automation is central to the ability to scale – quickly, reliably, and securely – distributed systems, whether viewed from an infrastructure POV (think hybrid cloud and multi-cloud operations), application architecture POV, security POV, or through virtually any other lens. Automation is key to making it work.


CIO, CDO and CTO: The 3 Faces of Executive IT

Most companies lack experience with the CDO and CTO positions. This makes these positions (and those filling them) vulnerable to failure or misunderstanding. The CIO, who has supervised most of the responsibilities that the CDO and CTO are being assigned, can help allay fears, and benefit from the cooperation, too. This can be done by forging a collaborative working partnership with both the CDO and CTO, which will need IT’s help. By taking a pivotal and leading role in building these relationships, the CIO reinforces IT’s central role and helps the company realize the benefits of executive visibility into the three faces of IT: data, new technology research, and developing and operating IT business operations. Many companies opt to place the CTO and CDO in IT, where they report to the CIO. Sometimes this is done upfront. Other times, it is done when the CEO realizes that he/she doesn't have the time or expertise to manage three different IT functions. This isn't a bad idea, since the CIO already understands the challenges of leveraging data and researching new technologies.


Log4j: The Pain Just Keeps Going and Going

Why is Log4j such a persistent pain in the rump? First, it’s a very popular, open source Java-based logging framework. So it’s been embedded into thousands of other software packages. That’s no typo. Log4j is in thousands of programs. Adding insult to injury, Log4j is often deeply embedded in code and hidden from view due to being called in by indirect dependencies. So, the CSRB stated that “Defenders faced a particularly challenging situation; the vulnerability impacted virtually every networked organization, and the severity of the threat required fast action.” Making matters worse, according to the CSRB, “There is no comprehensive ‘customer list’ for Log4j or even a list of where it is integrated as a subsystem.”  ... “The pace, pressure, and publicity compounded the defensive challenges: security researchers quickly found additional vulnerabilities in Log4j, contributing to confusion and ‘patching fatigue’; defenders struggled to distinguish vulnerability scanning by bona fide researchers from threat actors, and responders found it difficult to find authoritative sources of information on how to address the issues,” the CSRB said.
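The “no customer list” problem is why so many teams resorted to scanning their own disks. A naive sketch of such a scan (illustrative only; production scanners also recurse into nested archives and match known class-file hashes):

```python
import zipfile
from pathlib import Path

def find_log4j(root: str):
    """Yield Java archives whose entry list mentions log4j-core."""
    for path in Path(root).rglob("*"):
        if path.suffix not in {".jar", ".war", ".ear"}:
            continue
        try:
            names = zipfile.ZipFile(path).namelist()
        except (zipfile.BadZipFile, OSError):
            continue
        for name in names:
            # Catches bundled jars (e.g. WEB-INF/lib/log4j-core-*.jar)
            # as well as unpacked log4j classes.
            if "log4j-core" in name or "org/apache/logging/log4j/core" in name:
                yield path, name
                break

for archive, hit in find_log4j("/opt/apps"):   # placeholder scan root
    print(f"{archive}: {hit}")
```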


Major Takeaways: Cyber Operations During Russia-Ukraine War

The operational security expert known as the grugq says Russia did disrupt command-and-control communications - but the disruption failed to stymie Ukraine's military. The government had reorganized from a "Soviet-style" centralized command structure to empower relatively low-level military officers to make major decisions, such as blowing up runways at strategically important airports before they were captured by Russian forces. Lack of contact with higher-ups didn't compromise the ability of Ukraine's military to physically defend the country. ... Another surprising development is the open involvement of Western technology companies in Ukraine's cyber defense, WithSecure's Hypponen says. "I'm surprised by the fact that Western technology companies like Microsoft and Google are there on the battlefield, supporting Ukraine against governmental attacks from Russia, which is again, something we've never seen in any other war." Western corporations aren't alone, either. Kyiv raised a first-ever volunteer "IT Army," consisting of civilians recruited to break computer crime laws in aid of the country's military defense.



Quote for the day:

"Leadership is a way of thinking, a way of acting and, most importantly, a way of communicating." -- Simon Sinek

Daily Tech Digest - July 19, 2022

Open source isn’t working for AI

It’s hard to trust AI if we don’t understand the science inside the machine. We need to find ways to open up that infrastructure. Loukides has an idea, though it may not satisfy the most zealous of free software/AI folks: “The answer is to provide free access to outside researchers and early adopters so they can ask their own questions and see the wide range of results.” No, not by giving them keycard access to Facebook’s, Google’s, or OpenAI’s data centers, but through public APIs. It’s an interesting idea that just might work. But it’s not “open source” in the way that many desire. That’s probably OK. ... Because open source is inherently selfish, companies and individuals will always open code that benefits them or their own customers. Always been this way, and always will. To Loukides’ point about ways to meaningfully open up AI despite the delta between the three AI giants and everyone else, he’s not arguing for open source in the way we traditionally did under the Open Source Definition. Why? Because as fantastic as it is (and it truly is), it has never managed to answer the cloud open source quandary—for both creators and consumers of software—that DiBona and Zawodny laid out at OSCON in 2006.


Botnet malware disguises itself as password cracker for industrial controllers

What's weird is that the malware also deploys code to check the clipboard contents for cryptocurrency wallet addresses, and silently rewrites those details to point to another wallet so as to steal people's funds. Remember, this is running on PCs normally connected to industrial equipment, so perhaps the crooks behind this caper just grabbed some generic nasty to use. "Dragos assesses with moderate confidence the adversary, while having the capability to disrupt industrial processes, has financial motivation and may not directly impact Operational Technology (OT) processes," the team wrote. The Sality malware family has been around for almost two decades, first being detected in 2003, and can be commanded by its masterminds to perform other malicious actions, such as attacking routers, F-Secure analysts wrote in a report. Sality maintains persistence on the host PC through process injection and file infection, and abusing Windows' autorun functionality to spread copies of itself over USB, network shares, and external storage drives, according to Dragos.


Rescale and Nvidia partner to automate industrial metaverse

The new partnership between Rescale and Nvidia will allow enterprises to connect workflows between Rescale’s existing catalog of engineering and scientific containers, Nvidia’s extensive NGC offerings, and enterprises’ standard containers of their own models and supporting software. This new containerized approach to engineering software means teams can specify the software libraries and configurations that reflect industry best practices. The recent Nvidia and Siemens partnership is an ambitious effort to bring together physics-based digital models and real-time AI. Rescale’s announcement with Nvidia enhances this partnership, as accelerated computing combined with high-performance computing is the foundation that powers these use cases. For example, enterprises can take advantage of Nvidia’s work on Modulus, which uses AI to speed up physics simulations hundreds or thousands of times. Siemens estimates that integrating physics and AI models could help save the power industry $1.7 billion in reduced turbine maintenance. The partnership could also make it easier for companies to integrate other apps that work on these tools.


Uber Files leak shows why India’s approach to security and privacy matters

In the Uber Files investigation led by The Guardian under the International Consortium of Investigative Journalists (ICIJ), the leaked documents provide evidence of law breaking, lobbying of world leaders, the use of stealth technologies to evade raids, and opaque algorithms deployed by the Uber Corporation in 2012-16. The documents show instances where Uber executives sanctioned the use of stealth technologies like the ‘Kill Switch’ to evade regulations and to frustrate investigative agencies’ efforts at a fair probe in India, Belgium and other countries. On similar lines, reports indicate that e-commerce giant Amazon spent more than Rs 8,000 crore in India on legal fees in 2018-20. There are numerous incidents like these where regulators are in a Catch-22 situation of regulation and innovation. The Uber Files show how technology platforms deploy a multi-pronged strategy to subvert public opinion with sponsored academic work, allying with public officials, and wilfully stifling investigations of law enforcement agencies to dodge regulatory efforts for better transparency, accountability and public scrutiny of their architecture.


Is Microsoft’s VS Code really open source?

“Microsoft modifies VS Code in a way that a non-Microsoft VS Code fork can’t use extensions from the official Microsoft VS Code store. Not only that, some of the VS Code extensions developed and released by Microsoft will only work in the VS Code released by Microsoft and won’t work on non-Microsoft VS Code forks,” mentioned Ranatunge in his blog post. Microsoft has made similar moves in the past. It released a modified version of the open-source cross-platform IDE MonoDevelop as Visual Studio for Mac. Visual Studio for Mac has three versions: for students, professionals and enterprises. While the students’ version is free and supports classroom learning, individual developers and small companies must log in via the IDE to access the other versions. In 2021, Microsoft abruptly removed the Hot Reload functionality from the open-source .NET SDK, only to restore it later after the removal enraged the .NET community. As stated, Microsoft follows an open-core model for VS Code. Therefore, developers who want access to the full MIT-licensed open source code will have to download it from the repository and build VS Code on their own.


Open source security needs automation as usage climbs amongst organisations

"OSS is not insecure per se…the challenge is with all the versions and components that may make up a software project," he explained. "It is impossible to keep up without automation and prioritisation." He noted that the OSS community was responsive in addressing security issues and deploying fixes, but organisations tapping OSS would have to navigate the complexity of ensuring their software had the correct, up-to-date codebase. This was further compounded by the fact that most organisations would have to manage many projects concurrently, he said, stressing the importance of establishing a holistic software security strategy. He further pointed to the US National Institute of Standards and Technology (NIST), which offered a software supply chain framework that could aid organisations in planning their OSS security response. Asked if regulations were needed to drive better security practices, Liu said most companies saw cybersecurity as a cost and would not want to address it actively in the absence of any incentive.


How To Minimize the Impacts of Shadow IT on Your Business

Organizations looking to manage and mitigate the negative impacts of shadow IT must first perform an internal audit. Cloud security applications such as Microsoft’s Cloud App Security detect unsanctioned usage of applications and data. But detecting shadow IT is only one part of the equation. Companies should work to address the root causes. This may include optimizing communications between departments – particularly the IT team and other departments. If one department discovers a software solution that may be beneficial, they should feel comfortable approaching the IT team. CIOs and IT staff should develop processes that allow them to streamline software assessment and procurement. They should be able to give in-depth reasons why a particular tool suggested by a non-IT employee may be impracticable. Additionally, it is recommended that IT staff suggest a better alternative if they reject a proposed tool. Organizations should consider training non-IT staff in cybersecurity literacy and awareness. 


Gatling vs JMeter - What to Use for Performance Testing

There's a saying that every performance tester should know: "lies, damn lies, and statistics." If they don't know it yet, they will surely learn it in a painful way. A separate article could be written about why this sentence should be the mantra in the performance test area. In a nutshell: the median, arithmetic mean, and standard deviation are completely useless metrics in this field (you can use them only as an additional insight). You can get more detail on that in this great presentation by Gil Tene, CTO and co-founder at Azul. Thus, if a performance testing tool only provides these statistics, it can be discarded right away. The only meaningful metrics to measure and to compare performance are the percentiles. However, you should also treat them with some suspicion about how they were implemented. Very often the implementation is based on the arithmetic mean and standard deviation, which, of course, makes them equally useless. ... Another approach would be to check the source code of the implementation yourself. Regrettably, most performance test tools' documentation does not cover how percentiles are calculated.
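A small synthetic example shows why tails matter. The two latency samples below (numbers invented for illustration) have nearly identical means, but only the percentiles expose the difference users actually feel:

```python
import numpy as np

rng = np.random.default_rng(1)
steady = rng.normal(100, 5, 10_000)          # ~100 ms, tight spread
spiky = np.concatenate([
    rng.normal(90, 5, 9_500),                # mostly a bit faster...
    rng.normal(300, 20, 500),                # ...but 5% near 300 ms
])

for name, lat in (("steady", steady), ("spiky", spiky)):
    p50, p99 = np.percentile(lat, [50, 99])
    print(f"{name}: mean={lat.mean():.0f} ms  "
          f"p50={p50:.0f} ms  p99={p99:.0f} ms")

# Both means come out around 100 ms, but the spiky sample's p99 lands
# near 300 ms -- a tail the mean and standard deviation completely hide.
```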


BlackCat Adds Brute Ratel Pentest Tool to Attack Arsenal

Sophos investigators found that the attacker used commercially available tools such as AnyDesk and TeamViewer and also installed ngrok, an open-source remote access tool. "The attackers also used PowerShell commands to download and execute Cobalt Strike beacons on some machines, and a tool called Brute Ratel, which is a more recent pen-testing suite with Cobalt Strike-like remote access features," Brandt says. Sophos researchers found that the Brute Ratel binary was installed as a Windows service named wewe on an affected machine. One of the bigger challenges for the Sophos investigators was that some of the targeted organizations were still running the same servers that had been compromised through the Log4j vulnerability. Apart from ransoming systems on the network, the threat actors collected and exfiltrated sensitive data from the targets, uploading large volumes of it to Mega, a cloud storage provider. The attackers used a third-party tool called DirLister to create a list of accessible directories and files, or in some cases used a PowerShell script from a pen-tester toolkit, called PowerView.ps1, to enumerate the machines on the network.


Removing the blind spots that allow lateral movement

One of the biggest challenges of lateral movement detection is its low anomaly factor. Lateral movement attacks exploit the gaps in an organization’s user authentication process, and they tend to remain undetected because the authentication performed by the attacker is essentially identical to an authentication made by a legitimate user. Following the initial “patient zero” compromise, the attacker uses valid credentials to log in to organizational systems or applications. The legacy IAM infrastructure in place therefore cannot detect any anomaly in this process, which allows attackers to slip through and remain in the network undetected. Another key challenge is the gap between endpoint protection and identity protection. Endpoint protection solutions are mainly focused on detecting anomalies in file and process execution. The attacker, however, gains access by exploiting the legitimate authentication infrastructure, using legitimate files and processes, and therefore never appears on the radar of endpoint solutions.
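
For intuition, here is a deliberately simplified sketch of what identity-centric detection adds on top of this: scoring an authentication that uses perfectly valid credentials against the user's historical baseline. The users, hosts, score weights, and thresholds are invented for illustration; real products build far richer behavioral baselines.

```java
import java.util.Map;
import java.util.Set;

// Simplified sketch of baseline-based authentication risk scoring.
// All names, baseline data, and weights are illustrative assumptions.
public class AuthBaseline {

    record Login(String user, String sourceHost, int hourOfDay) {}

    // Hosts each user has historically authenticated from (hypothetical baseline).
    static final Map<String, Set<String>> KNOWN_HOSTS = Map.of(
            "alice", Set.of("ws-alice-01"));

    // The credentials are valid either way; a never-seen source host or an
    // unusual hour is what raises the risk score.
    static int riskScore(Login login) {
        int score = 0;
        if (!KNOWN_HOSTS.getOrDefault(login.user(), Set.of()).contains(login.sourceHost()))
            score += 50; // first authentication from this host
        if (login.hourOfDay() < 6 || login.hourOfDay() > 22)
            score += 25; // outside the user's normal working hours
        return score;
    }

    public static void main(String[] args) {
        System.out.println(riskScore(new Login("alice", "ws-alice-01", 10))); // 0: matches baseline
        System.out.println(riskScore(new Login("alice", "srv-db-07", 3)));    // 75: possible lateral movement
    }
}
```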



Quote for the day:

"Sport fosters many things that are good; teamwork and leadership" -- Daley Thompson

Daily Tech Digest - July 18, 2022

Cyber Safety Review Board warns that Log4j event is an “endemic vulnerability”

According to the report, "The pace, pressure, and publicity compounded the defensive challenges." As a result, researchers found additional vulnerabilities in Log4j, contributing to confusion and "patching fatigue," and "responders found it difficult to find authoritative sources of information on how to address the issues. This frenetic period culminated in one of the most intensive cybersecurity community responses in history." ... The few organizations that responded effectively to the event "understood their use of Log4j and had technical resources and mature processes to manage assets, assess risk, and mobilize their organization and key partners to action. Most modern security frameworks call out these capabilities as best practices." ... A fog still hovers over the event because, "No authoritative source exists to understand exploitation trends across geographies, industries or ecosystems. Many organizations do not even collect information on specific Log4j exploitation, and reporting is still largely voluntary. Most importantly, however, the Log4j event is not over."


DTN’s CTO on combining IT systems after a merger

Enterprises often make strategic errors when combining IT systems following an acquisition, Ewe says. “The number one mistake I see is, ‘Since we acquired you, clearly we win,’” he says. “Just because A bought B, you don’t want to assume that A has better technology than B.” Another common mistake is to go solely by the numbers, picking one company’s IT system over the other’s because it has the highest revenue or profitability, he says: “The issue there is that you’re oversimplifying the process.” Given the investment in time and money necessary to merge two companies’ IT systems, “it’s worthwhile spending an extra few weeks up-front to make a more thorough analysis of which solution or which pieces of which solutions should come together,” Ewe says. Jumping straight in and making a wrong decision can cost more in the long term. Ewe consulted with product and sales management, and with customers, to identify the needs DTN’s single engine would have to satisfy, as well as the use cases it would serve, before evaluating the existing assets against those needs. 


Ransomware and backup: Overcoming the challenges

Recovering data after a ransomware attack is more complex and more risky than recovering from a system outage or natural disaster. The greatest risk is that backups contain undetected ransomware, which then replicates into the production or recovered systems. This risk is reduced by using air-gapped copies, immutable copies and snapshots, and by keeping more copies than conventional backup alone would require. All this demands a more cautious approach to data recovery, one that can be at odds with the commercial pressure for short RTOs and recent RPOs. Matters are made more difficult because there are no viable, fool-proof systems that can scan data for ransomware before it is backed up, says Barnaby Mote, managing director at backup specialist Databarracks. “Before ransomware was a thing, replicating data from production systems to DR as quickly as possible was a sound recovery strategy for conventional disasters,” he says. “Now, with ransomware, it has the opposite of the desired effect, rendering recovery systems unusable.”


Continuous Intelligence: Definition, Benefits, and Examples

While humans cannot inspect every possible characteristic and combination in the flood of incoming data, machines can. Complementing analytics that provide precise answers to questions users know to ask, a machine can continuously monitor data in the background to detect unknown correlations and trends that deviate from what would have been expected by the system based on previous observations. This way, companies can identify hidden, but potentially relevant signals in the data. Gartner predicts that by 2022, more than half of major new business systems will incorporate continuous intelligence capabilities. By integrating artificial intelligence (AI)-based continuous intelligence into their day-to-day operations, companies can: Boost efficiency by spending less time sifting through data from a variety of disparate sources; Focus on what really matters for their business; Speed time to action. By automatically inspecting critical business health indicators such as revenue, Web page views, active users, or transaction volume in real time, businesses can accelerate their time to insight and action and better respond to situations before business is impacted.
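
As a rough sketch of what "continuously monitoring data in the background" can mean in practice, the toy monitor below flags a reading that deviates sharply from a rolling baseline. The metric, window size, and three-sigma threshold are illustrative assumptions, not a description of any particular product.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy continuous-monitoring sketch: flag a metric reading that deviates
// strongly from a rolling baseline. Data and thresholds are illustrative.
public class MetricMonitor {

    private final Deque<Double> window = new ArrayDeque<>();
    private final int size;

    MetricMonitor(int size) { this.size = size; }

    boolean isAnomalous(double value) {
        if (window.size() < size) { window.add(value); return false; } // still warming up
        double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double var = window.stream().mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0);
        double std = Math.sqrt(var);
        boolean anomaly = std > 0 && Math.abs(value - mean) > 3 * std; // three-sigma rule
        window.removeFirst(); // slide the window forward
        window.add(value);
        return anomaly;
    }

    public static void main(String[] args) {
        MetricMonitor monitor = new MetricMonitor(5);
        double[] revenuePerMinute = {100, 102, 98, 101, 99, 100, 12}; // sudden drop at the end
        for (double v : revenuePerMinute)
            System.out.println(v + " -> " + (monitor.isAnomalous(v) ? "ALERT" : "ok"));
    }
}
```

The same loop, pointed at revenue, page views, or transaction volume, is what lets a system raise an alert before a human thinks to ask the question.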


7 reasons Java is still great

As a longtime Java programmer, I found it surprising—astonishing, actually—to watch the language successfully incorporate lambdas and closures. Adding functional constructs to an object-oriented programming language was a highly controversial and impressive feat. So was absorbing concepts introduced by technologies like Hibernate and Spring (JSR 317 and JSR 330, respectively) into the official platform. That such a widely used technology can still integrate new ideas is heartening. Java's responsiveness helps to ensure the language incorporates useful improvements. It also means that developers know they are working within a living system, one that is being nurtured and cultivated for success in a changing world. Project Loom—an ambitious effort to re-architect Java’s concurrency model—is one example of a project that underscores Java's commitment to evolving. Several other proposals currently working through the JCP demonstrate a similar willingness to go after significant goals to improve Java technology. The people working on Java are only half of the story. The people who work with it are the other half, and they reflect the diversity of Java's many uses.
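
For readers who haven't watched that evolution first-hand, here is a tiny example of the constructs in question: a lambda closing over local state, feeding a stream pipeline. Lambdas and streams have been standard Java since version 8; the names below are our own.

```java
import java.util.List;
import java.util.function.Predicate;

// Lambdas, closures, and stream pipelines: the functional constructs
// the article refers to. (List.of requires Java 9+.)
public class LambdaExample {
    public static void main(String[] args) {
        int threshold = 3; // captured by the lambda below, i.e. a closure
        Predicate<String> longEnough = s -> s.length() > threshold;

        List<String> names = List.of("Ada", "Grace", "Linus", "Guy");
        names.stream()
             .filter(longEnough)            // lambda as a first-class value
             .map(String::toUpperCase)      // method reference
             .forEach(System.out::println); // prints GRACE, LINUS
    }
}
```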


Search Here: Ransomware Groups Refine High-Pressure Tactics

Ransomware groups continue to refine the tactics they use to better pressure victims into paying. And they're succeeding. "In recent months, we have seen an increase in the number of ransomware attacks and ransom amounts being paid," the heads of Britain's lead cybersecurity agency and privacy watchdog warned last week in an open letter to the legal industry. The impetus for the alert from Britain's National Cyber Security Centre - the public-facing arm of intelligence agency GCHQ - and the Information Commissioner's Office: they're urging solicitors never to advise clients to pay a ransom. Doing so will not lessen any penalties the ICO might levy, helps perpetuate the ransomware business model, and could violate U.S. sanctions, they say. But the increase in ransoms being paid speaks to the continuing innovation of ransomware groups. Psychological pressure remains a specialty: after infecting systems, many types of ransomware reboot infected PCs to a lock screen that lists the ransom demand, a cryptocurrency wallet address for routing funds, and a countdown timer.


Functional programming is finally going mainstream

For some, using an object-oriented language like Java, JavaScript, or C# for functional programming can feel like swimming upstream. “A language can steer you towards certain solutions or styles of solutions,” says Gabriella Gonzalez, an engineering manager at Arista Networks. “In Haskell, the path of least resistance is functional programming. You can do functional programming in Java, but it’s not the path of least resistance.” A bigger issue for those mixing paradigms is that you can’t expect the same guarantees you might receive from pure functions if your code includes other programming styles. “If you’re writing code that can have side effects, it’s not functional anymore,” Williams says. “You might be able to rely on parts of that code base. I’ve made various functions that are very modular, so that nothing touches them.” Working with strictly functional programming languages makes it harder to accidentally introduce side effects into your code. “The key thing about writing functional programming in something like C# is that you have to be careful because you can take shortcuts and then you’ve got the exact sort of mess you would have if you weren’t using functional programming at all,” Louth says.
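
A small Java sketch of the distinction being drawn: two methods with the same shape, one pure and one that quietly mutates shared state, so the functional guarantees apply only to the first. The example is ours, not from the article.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustration: in a multi-paradigm language nothing stops you from mixing
// pure and impure code, so the guarantees hold only for the strictly
// functional parts of the code base.
public class PurityExample {

    // Pure: output depends only on the input. Safe to cache, reorder, parallelise.
    static List<Integer> doubled(List<Integer> xs) {
        return xs.stream().map(x -> x * 2).collect(Collectors.toList());
    }

    static int counter = 0;

    // Impure: same signature shape, but it mutates shared state as a side
    // effect, so callers can no longer rely on the functional guarantees.
    static List<Integer> doubledWithSideEffect(List<Integer> xs) {
        counter += xs.size();
        return xs.stream().map(x -> x * 2).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(doubled(List.of(1, 2, 3)));               // [2, 4, 6]
        System.out.println(doubledWithSideEffect(List.of(1, 2, 3))); // [2, 4, 6], but state changed
        System.out.println("counter = " + counter);                  // the hidden side effect
    }
}
```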


Safeguarding the open source model amidst big tech involvement

Two of the main techniques to safeguard open source and its community are smart licensing tactics and constant innovation. The first is to simply switch a project's licence from an open source licence to a more restrictive one. Two specific licences can be used to protect against cloud providers and corporations: AGPLv3 and SSPL, adopted by the likes of MongoDB, Elastic and Grafana to protect themselves from AWS. For instance, while many projects shifted away from GPL-style licences towards more permissive forms of licensing, under the GPL, contributors are required to make their code available to the open source community; the so-called “copyleft”. This traditional licensing style helps to create a more open, transparent ecosystem. Another way in which open source can safeguard its future is through smart innovation: constantly innovating to satisfy users should be the way forward for the evolution of open source projects and solutions. This would enable companies to maintain their competitive edge and keep up with technological trends.


5 ways fear can derail your digital transformation strategy

When we confront new work technologies such as a hybrid workplace, virtual meeting rooms, or new software, we tend to resist or avoid them simply because they’re new and we’re not used to them. This creates division. A company looking to offer a hybrid workplace might encounter resistance from employees, managers, and even customers who refuse to recognize this arrangement. What appears to be a simple reluctance to change is actually a deep-seated fear of changing a comfortable status quo. What you can do about this: Offer facts to neutralize fear. People often use their own frame of reference if they are not given something tangible to hold on to. If the change involves new technology, demonstrate the technology. Let them see how it works. If the change is organizational, such as a hybrid workspace, present the facts about how it will work, what will change, and what will stay the same. Listen to and respond to their questions and objections. Humans are dominated by emotion, and logic is always playing catch-up. 


The Four P's of Pragmatically Scaling Your Engineering Organization

Your people aren’t just the heart and soul of the company, they’re the building blocks for its future. When you're growing rapidly it can be tempting to add developers to your team as quickly as possible, but it's important to first consider your company goals while remaining practical about how you’re scaling. This is the key foundation for building the right organization. ... Scaling your processes comes down to practical prioritization. It is crucial to establish processes that balance both short- and long-term wins for the company, beginning with the systems that need to be fixed immediately. Start by instituting a planning process that looks at things from an annual as well as a quarterly, or even monthly, perspective – and try not to get bogged down deliberating over a planning methodology in the first stage. ... Scaling the platform is often the biggest challenge organizations face in the hyper-growth phase. But it’s important to remember that building toward a north star doesn’t mean that you’re building the north star. Now is the time to focus on intentional, iterative improvement of the platform rather than implementing sweeping changes to your product.



Quote for the day:

"It is one thing to rouse the passion of a people, and quite another to lead them." -- Ron Suskind