
Daily Tech Digest - March 11, 2024

Generative AI is even more of a mixed bag when it comes to writing secure code. Many hope that, by ingesting best coding practices from public code repositories — possibly augmented by a company’s own policies and frameworks — the code AI generates will be more secure from the start and avoid the common mistakes that human developers make. ... Generative AI has the potential to help DevSecOps teams find vulnerabilities and security issues that traditional testing tools miss, to explain the problems, and to suggest fixes. It can also help with generating test cases. Some security flaws are still too nuanced for these tools to catch, says Carnegie Mellon’s Moseley. “For those challenging things, you’ll still need people to look for them, you’ll need experts to find them.” However, generative AI can pick up standard errors. ... A bigger question for enterprises will be how much of the generative AI functionality to automate, and how much to keep humans in the loop: for example, when AI is used to detect code vulnerabilities early in the process. “To what extent do I allow code to be automatically corrected by the tool?” Taglienti asks.


White House Advisory Team Backs Cybersecurity Tax Incentives

Technology trade groups and cybersecurity experts have long called for financial incentives to help drive the implementation of new cybersecurity standards, but proposals differ on how to best encourage industries to prioritize cybersecurity investments. A white paper published in 2011 by the U.S. Chamber of Commerce, the Center for Democracy and Technology and other industry groups urged the federal government to focus on cybersecurity incentives over mandates, warning that "a more government-centric set of mandates would be counterproductive to both our economic and national security." In April 2023, the Federal Energy Regulatory Commission approved a rule allowing utility companies to include cybersecurity spending as part of their calculation for settling rates. FERC acting Chairman Willie Phillips said at the time that financial incentives must accompany federal mandates "to encourage utilities to proactively make additional cybersecurity investments in their systems." While the FERC rule allows utilities to recover cybersecurity expenses through customer rates, the NSTAC model suggests providing tax incentives upfront so critical infrastructure operators pay less when they spend money on enhanced cybersecurity standards.


Continuous Delivery: Gold Standard for Software Development

In the context of CD, developers must be able to easily and quickly understand why a product or update has failed. Given that between 50% and 80% of updates to software fail, developers need to be able to rapidly identify the exact point of failure and resolve it. This reduction in incident resolution time — or bug fixing — is one of the significant benefits of developers consistently working toward the metric of releasability. This means that when problems arise, they are easy to fix and recovery cycles are quick. To meet increasingly quick development targets, developers need to find ways to reduce the time they spend on incident response and troubleshooting. To help with this, they need access to real-time insights that allow them to identify, diagnose and resolve any incidents as they arise. These insights can give developers an instant, digestible understanding of how changes affect their software development pipelines, even when changes may not be significant enough to cause an incident. These “change events” offer a trail of breadcrumbs through every change made to a product throughout its development cycle, allowing developers to see the direct effects of each update. 
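The breadcrumb trail of "change events" described above can be sketched in a few lines. Everything here — the ChangeLog class, the component names, the event shape — is illustrative rather than taken from any particular CD tool:

```python
import time

# Illustrative "change event" log: each deployment-affecting change is
# recorded so a failure can be traced back to a specific update.
class ChangeLog:
    def __init__(self):
        self.events = []

    def record(self, component, description):
        self.events.append({
            "time": time.time(),
            "component": component,
            "description": description,
        })

    def trail(self, component):
        # Breadcrumb trail: every change touching a given component,
        # oldest first, to narrow down the point of failure.
        return [e for e in self.events if e["component"] == component]

log = ChangeLog()
log.record("checkout", "bumped payment SDK to 2.4")
log.record("search", "re-indexed catalog")
log.record("checkout", "enabled new tax rules")
print([e["description"] for e in log.trail("checkout")])
```

In practice these events would come from CI/CD webhooks rather than manual calls, but the query pattern — filter the full history down to the changes touching the failing component — is the same.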


Transitioning to memory-safe languages: Challenges and considerations

We encourage the community to consider writing in Rust when starting new projects. We also recommend Rust for critical code paths, such as areas typically abused or compromised or those holding the “crown jewels.” Great places to start are authentication, authorization, cryptography, and anything that takes input from a network or user. While adopting memory safety will not fix everything in security overnight, it’s an essential first step. But even the best programmers make memory safety errors when using languages that aren’t inherently memory-safe. By using memory-safe languages, programmers can focus on producing higher-quality code rather than perilously contending with low-level memory management. However, we must recognize that it’s impossible to rewrite everything overnight. OpenSSF has created a C/C++ Hardening Guide to help programmers make legacy code safer without significantly impacting their existing codebases. Depending on your risk tolerance, this is a less risky path in the short term. Once your rewrite or rebuild is complete, it’s also essential to consider deployment.


Personalised learning for Gen Z: How customised content is reshaping education

As no two students possess the same skills, learning gaps and future goals, a range of personalised learning methods is necessary. This includes adaptive and blended learning, together with student-directed and project-based learning. In this way, students absorb lessons faster and more effectively while retaining them longer. Traditional learning, by contrast, is based on physical classrooms and standard curricula. It’s also time-consuming and cumbersome, with a one-size-fits-all approach that overlooks individual needs. Given the numerous mandatory textbooks and reading materials, it’s expensive, unlike the more cost-effective e-learning modules. Additionally, technology facilitates the delivery of customised content via short videos and other bite-sized formats more suitable for tech-savvy Gen Zs. With instant access to information for shopping, travel and more, this youthful group holds the same expectations of learning. As a result, Gen Zs like consuming information via videos, podcasts or personalised learning modules that can be accessed later.


Agile Architecture, Lean Architecture, or Both?

Creating an architecture for a software product requires solving a variety of complex problems; each product faces unique challenges that its architecture must overcome through a series of trade-offs. We have explored this decision process in other articles, in which we described the concept of the Minimum Viable Architecture (MVA) as a reflection of these trade-off decisions. The MVA is the architectural complement to a Minimum Viable Product, or MVP. The MVA balances the MVP by making sure that the MVP is technically viable, sustainable, and extensible over time; it is what differentiates the MVP from a throw-away proof of concept. Lean approaches frame the core problem of software development as improving the flow of work; from an architectural perspective, however, the core problem is creating an MVP and an MVA that are both minimal and viable. One key aspect of an MVA is that it is developed incrementally over a series of releases of a product. The development team uses the empirical data from these releases to confirm or reject hypotheses that they form about the suitability of the MVA.


How generative AI will change low-code development

“Skill sets will evolve to encompass a blend of traditional coding expertise, along with proficiency in utilizing low/no-code platforms, understanding how to integrate AI technologies, and effectively collaborating in teams using these tools,” says Ed Macosky, chief product and technology officer at Boomi. “The combination of low code alongside copilots will allow developers to enhance their skills and focus on supporting business outcomes, rather than spending the bulk of their time learning different coding languages.” Armon Petrossian, CEO and co-founder of Coalesce, adds, “There will be a greater emphasis on analytical thinking, problem-solving, and design thinking with less of a burden on the technical barrier of solving these types of issues.” Today, code generators can produce code suggestions, single lines of code, and small modules. Developers must still evaluate the code generated to adjust interfaces, understand boundary conditions, and evaluate security risks. But what might software development look like as prompting, code generation, and AI assistants in low-code improve? “As programming interfaces become conversational, there’s a convergence between low-code platforms and copilot-type tools,” says Srikumar Ramanathan, chief solutions officer at Mphasis.


Is It Too Late for My Organization to Leverage AI?

The short answer is no, but a pragmatic approach to adopting AI is becoming increasingly valuable. ... The key to efficient AI implementation is caution and planning. Leaders must assess their enterprise’s organizational, operational, and business challenges and use those findings to guide an intelligent AI strategy. Organizationally, successful AI implementation requires interdepartmental collaboration and training. Stakeholders -- including leaders and the daily drivers of productivity -- should understand the benefits of AI implementation. Otherwise, employee anxieties or misinformation might impede progress. Operational challenges to AI deployment include inefficient manual processes and a lack of standardization. Remember, AI is not a silver bullet for resolving existing tech inefficiencies. Before implementation, leaders must assess their tech stack, ensuring that all relevant software systems can communicate with one another. From a business perspective, unclear AI use cases are a recipe for disaster. AI and machine learning (ML) investments should have specific KPIs. Furthermore, all investments should take a phased approach that prioritizes a solid data foundation before deployment.


Has the CIO title run its course?

“It’s time for the rest of organizations to recognize there is not a single CIO role anymore but layers of CIOs,” he says. The chief of technology needs to be a digital leader “and that’s why the name is so important.” While acknowledging that every company is different, Wenhold says if he were on the outside looking in at a senior executive meeting, “the person sitting there with the CBTO title isn’t talking about keeping the lights on, and the internet connection up, and what technologies we’re using. They’re talking about how is the business absorbing the latest deployment into production.” The person responsible for keeping the lights on should be a director, he adds, and “I don’t see that role at the table.” Although technology’s role has been widely elevated in most companies across all industries, Wenhold believes it will take some time for other organizations to understand what the CBTO role can and should be. “I still believe we have a lot of work to do in the industry. The CIO name is more important to your peers than to the person holding the title,” he maintains. Sule agrees, saying that the CBTO title is effective because it helps to “blur the lines” between technology and business and instills a sense that everyone in Sule’s department is there to serve the business.


Japan Blames North Korea for PyPI Supply Chain Cyberattack

"This attack isn't something that would affect only developers in Japan and nearby regions," Gardner points out. "It's something for which developers everywhere should be on guard." Other experts say non-native English speakers could be more at risk for this latest attack by the Lazarus Group. The attack "may disproportionately impact developers in Asia," due to language barriers and less access to security information, says Taimur Ijlal, a tech expert and information security leader at Netify. "Development teams with limited resources may understandably have less bandwidth for rigorous code reviews and audits," Ijlal says. Jed Macosko, a research director at Academic Influence, says app development communities in East Asia "tend to be more tightly integrated than in other parts of the world due to shared technologies, platforms, and linguistic commonalities." He says attackers may be looking to take advantage of those regional connections and "trusted relationships." Small and startup software firms in Asia typically have more limited security budgets than do their counterparts in the West, Macosko notes.



Quote for the day:

"After growing wildly for years, the field of computing appears to be reaching its infancy." -- John Pierce

Daily Tech Digest - October 03, 2023

How AI can be a ‘multivitamin supplement’ for many industries

It won’t replace humans in the same way that supplements don’t replace a healthy diet. Still, it will strengthen companies’ existing operations and fill in the gaps that are currently making work more burdensome for human laborers. ... It’s exciting to realize that there will soon be professions that we don’t even have names for yet. As the technology ages and matures and governing bodies create the necessary laws and regulations, our current state of uncertainty will transform into an exciting, bright new future of human-tech cooperation. We are already seeing this future take shape. For instance, MarTech companies are testing AI-powered fraud detection to supplement the work that human experts do to monitor traffic quality and transparency. This not only eases the human workload but helps companies save resources while getting better results overall. Similar benefits of human-AI collaboration can be seen in healthcare, with AI that can be trained to assist patients with recovery treatments or perform routine tasks in medical offices or hospitals, freeing nurses and doctors up to focus on patient outcomes. 


Banking on Innovation: How Finance Transforms Technological Growth for Decision Makers

Regulation is a sensitive topic for the financial industry. While the need for a certain degree of oversight is universally accepted, excessive regulation can stifle the very innovation that drives economic growth. On the other hand, too little regulation can open the doors to risk accumulation and financial crises. Striking this balance is one of the most challenging tasks that government leaders face. Policies must be evidence-based, derived from transparent risk-assessment models and economic simulations. Regulatory sandboxes could offer a safe environment for financial institutions to experiment with new services and products under the watchful eye of regulators, thereby fostering innovation while ensuring compliance. ... One of the most potent ways in which PPPs can contribute to revenue management is through asset monetization. Governments often sit on a wealth of underutilized assets, ranging from real estate to utilities. A PPP can unlock the value of these assets by involving private-sector expertise and investment.


Microsoft Releases Its Own Distro of Java 21

Microsoft’s continuing support for OpenJDK is a strong indicator of how important Java is in the enterprise software space. “And the new features of Java 21 such as lightweight threads are maintaining Java’s relevance in the cloud native age,” said Mike Milinkovich, executive director of the Eclipse Foundation. “Being one of the first vendors to ship Java SE 21 support shows how focused Microsoft is in meeting the needs of Java developers deploying workloads on Azure.” Also, Spring developers will be pleased to know that Spring Boot 3.2 now supports Java 21 features. Many other frameworks and libraries will soon release their JDK 21-supported versions. “Microsoft has some of the best developer tool makers in the world — to have them add Java to the mix makes sense,” said Richard Campbell, founder of Campbell & Associates. “Of course, that happened a couple of years ago, and JDK 21 is just the latest implementation. In the end, Microsoft wants to ensure that Azure is a great place to run Java, so having a team working on Java running in Azure helps to make that true. What does it mean for the ecosystem? More choices for implementations of Java, better Java tooling, and more places to run Java fast and securely.”


Why embracing complexity is the real challenge in software today

The reason we can’t just wish away or “fix” complexity is that every solution — whether it’s a technology or methodology — redistributes complexity in some way. Solutions reorganize problems. When microservices emerged (a software architecture approach where an application or system is composed of many smaller parts), they seemingly solved many of the maintenance and development challenges posed by monolithic architectures (where the application is one single interlocking system). However, in doing so microservices placed new demands on engineering teams; they require greater maturity in terms of practices and processes. This is one of the reasons why we cautioned people against what we call “microservice envy” in a 2018 edition of the Technology Radar, with CTO Rebecca Parsons writing that microservices would never be recommended for adoption on Technology Radar because “not all organizations are microservices-ready.” We noticed there was a tendency to look to adopt microservices simply because it was fashionable. This doesn’t mean the solution is poor or defective. 


Balancing Cost and Resilience: Crafting a Lean IT Business Continuity Strategy

Effective monitoring is the backbone of a resilient infrastructure. The approach should focus on:

Filtering out the noise - Monitoring solutions need to ensure that only critical notifications are sent out, preventing information overload and ensuring that the right people are alerted promptly when critical events inevitably happen.

Acting quickly and decisively - Time is of the essence during disruptions. IT, DevOps, SIRT, and even PR teams need to be well coordinated for various types of events. From security breaches to data center fires or even just mundane equipment failures, anything that might result in customer or operational disruptions will involve cross-team communication and collaboration. The only way to get better at handling these is to have documentation on what should be done, a clear chain of command, and practice drills.

In conclusion, a comprehensive backup and recovery strategy is essential for businesses aiming for uninterrupted operations. While there are many solutions available in the market, it’s crucial to find one that aligns with your business needs.
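The noise-filtering point can be sketched as a severity threshold that decides which alerts page someone immediately and which get batched for later review. The severity levels, alert texts, and routing policy below are all made up for illustration:

```python
# Hypothetical severity scale; only alerts at or above the paging
# threshold interrupt on-call, the rest go to a review digest.
SEVERITY = {"info": 0, "warning": 1, "critical": 2}

def route_alerts(alerts, page_threshold="critical"):
    page, digest = [], []
    for a in alerts:
        if SEVERITY[a["severity"]] >= SEVERITY[page_threshold]:
            page.append(a)
        else:
            digest.append(a)
    return page, digest

alerts = [
    {"msg": "disk 70% full", "severity": "info"},
    {"msg": "replica lag rising", "severity": "warning"},
    {"msg": "primary DB down", "severity": "critical"},
]
page, digest = route_alerts(alerts)
print(len(page), len(digest))
```

Real alerting systems layer deduplication, grouping, and escalation on top of this, but a threshold like this is the simplest version of "only critical notifications are sent out."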


How do you solve a problem like payments infrastructure?

Today, banks need to be willing to adopt new technology to change, and this will involve working with a third-party service provider. Another roundtable participant added that as part of this process, it is imperative to utilise validation evaluation to recycle new enhancements. Otherwise, banks will end up with the belief that the improvements that were made are unique, but in fact, competitors will keep pace or even get ahead when it comes to the innovation game or enticing new customers. This banker revealed that they opted to not disconnect from their existing infrastructure, but instead chose a top layer architecture to process payments in a more efficient way. In line with this, the participant added that culture must be considered, because this is what brings together the different components that are needed and ultimately reveals when the time is right to change the systems. Providing background information, this Sibos attendee mentioned that 15 years ago, the bank considered whether it would be more cost effective to map local, regional, or global ISO 20022 messaging into existing architecture or to create a new platform that could work for the next 20 years. 


GenAI: friend or foe in fraud risk management?

Building high-performance fraud detection algorithms today is dependent on real-life customer and transaction data to train and validate the models, which has remained a constraint. GenAI can help with realistic synthetic data creation for model training and validation, scenario and fraud attack simulation to identify vulnerabilities and design controls to mitigate these risks. Customer due diligence (CDD) is a critical function in fraud prevention – be it new client onboarding or new credit approvals (loans, credit cards, increasing credit limits) for existing clients. GenAI can be a great tool to go through piles of KYC documentation and reference them with customer-filled forms and other subscribed data sources of the FI to come up with a CDD summary report. GenAI can also be used to analyse user communications with FIs – such as emails, chats, documents and product and service requests – to extract insights on financial behaviour, sentiment analysis for intentions and potential risks of fraud. Fraud investigations can also leverage GenAI for alert and dispute resolution by accessing different sources of information on the context and providing a summary of the case that will aid in its decisioning.
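As a toy illustration of synthetic data for fraud-model training, the sketch below fabricates labelled transactions with a skewed amount and time-of-day distribution for the fraud class. A real pipeline would use a generative model fitted to production data; every name, threshold, and number here is invented:

```python
import random

# Toy generator of synthetic transactions for fraud-model training.
# Fraudulent rows get larger amounts and odd hours, mimicking the kind
# of class separation a real generative model would learn from data.
def synthetic_transactions(n, fraud_rate=0.02, seed=42):
    rng = random.Random(seed)  # seeded for reproducible training sets
    rows = []
    for i in range(n):
        is_fraud = rng.random() < fraud_rate
        amount = rng.uniform(500, 5000) if is_fraud else rng.uniform(5, 300)
        rows.append({
            "txn_id": i,
            "amount": round(amount, 2),
            "hour": rng.randint(0, 5) if is_fraud else rng.randint(8, 22),
            "label": int(is_fraud),
        })
    return rows

data = synthetic_transactions(1000)
print("fraud cases:", sum(r["label"] for r in data))
```

The value of synthetic data in this setting is that the label is known by construction, so models can be trained and validated without exposing real customer records.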


Weaving Cyber Resilience into the Strategic Fabric of Higher Education Institutions

There is no shortage of steps that institutions can take to bolster their cyber resilience and ensure that, should the worst happen, they’re prepared. A good place to start is by assessing the institution’s current level of resilience and looking for any gaps or obstacles. In many cases, Goerlich says, the key is simplification. For example, adopting a zero-trust security strategy can also improve a college or university’s ability to respond, maintain continuity and bounce back following an adverse event, he says. Another factor complicating resiliency for many institutions is overly complex network environments, particularly in the cloud. As colleges and universities clamor to embrace digital transformation and cloud networking, it’s not uncommon for their environments to grow to a degree that becomes unmanageable. But uncontrolled and unregulated cloud sprawl can have a serious impact on an institution’s resilience. Developing easy-to-follow approaches and processes — along with adopting simplified, automated and easy-to-use technology solutions — can make a significant difference, Goerlich says. 


How to make asynchronous collaboration work for your business

Asynchronous working can bring some benefits that synchronous work can't – most notably speed. “Real-time communication means everyone must be in the same place, or at least the same time zone, in order for work to happen. If workers need to wait for syncs to decide or act on something, it slows down the company as a whole and reduces its ability to compete,” says van der Voort. Asynchronous collaboration allows people to work at their own pace, and does not force them to wait for input from others. Morning people, evening people, midnight oil people, collaborating across geographies, can in some cases deliver higher quality results than forcing everyone to come together for a 10am video call. To get this working well, policies such as having core working hours for each staff member, and having very clear goals and anticipated outcomes for all meetings, can be incredibly useful. “One of the most significant and highly sought-after benefits asynchronous collaboration offers is a dramatic reduction in meetings,” argues Lawyer. “It allows team members to contribute in the least amount of minutes, freeing up time for other work.”


Securing the Evolution of Smart Home Applications

Very few in the cybersecurity community have forgotten one of the most noteworthy incidents, the Mirai botnet, which took place back in 2016. Attackers behind the botnet targeted the site of well-known cybersecurity journalist Brian Krebs. The Distributed Denial of Service (DDoS) attack lasted for days, 77 hours to be exact. It involved 24,000 Mirai-infected Internet-of-Things devices, including personal surveillance cameras. Jumping ahead to June 2023, the Federal Trade Commission (FTC) settled a case with Ring’s owner, Amazon. The online retailing giant agreed to pay the FTC nearly $31 million in penalties to settle recently filed federal lawsuits over privacy violations. The FTC alleged that Ring compromised customer privacy by allowing any employee or contractor to access consumers’ private videos. The FTC also claimed hackers used Ring cameras’ two-way functionality to harass and even physically threaten consumers – including children – if they did not pay a ransom. These types of incidents clearly illustrate how critical it is to secure devices like cameras in a smart home.



Quote for the day:

"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others." -- Jack Welch

Daily Tech Digest - September 04, 2023

What happens when finops finds bad cloud architecture?

Cloud finops teams can evaluate the performance and scalability of cloud infrastructure. Monitoring key performance indicators such as response times, latency, and throughput can identify bottlenecks or areas where the current architecture limits scalability and performance. Since finops normally tracks this through money spent, it’s easy to determine exactly how much architecture blunders are costing the company. It’s not unusual to find that a cloud-deployed system costs 10 times more money per month than it should. Those numbers are jarring for most businesses. Remember, all that money could have been spent in other places, such as on innovations. ... However, there are more strategic blunders, such as only using a single cloud provider (see example above). Maybe it seemed like a good idea at the time. Perhaps a vendor had a relationship with several board members, or there were political reasons for the limited choices. Unfortunately, the company still ends up with a great deal of technical debt which could have been avoided.
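The point about surfacing architecture blunders through spend reduces to a simple ratio check against a baseline estimate. The service names, dollar figures, and the 3x flagging factor below are all hypothetical:

```python
# Illustrative finops check: flag services whose monthly cloud spend
# far exceeds an architectural baseline estimate (numbers are made up).
baseline = {"orders-api": 1200.0, "reports": 300.0, "search": 800.0}
actual = {"orders-api": 1350.0, "reports": 3100.0, "search": 790.0}

def cost_anomalies(actual, baseline, factor=3.0):
    # Return each overspending service with its actual/baseline ratio;
    # services missing from the baseline are never flagged here.
    return {svc: actual[svc] / baseline[svc]
            for svc in actual
            if actual[svc] > factor * baseline.get(svc, float("inf"))}

print(cost_anomalies(actual, baseline))
```

A service spending ten times its baseline, as in the article's example, jumps out of a report like this immediately, which is exactly how finops turns a vague architectural concern into a concrete dollar figure.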


The quantum threat: Implications for the Internet of Things

Quantum computing, though it might be a decade or two away, presents a threat to IoT devices that have been secured against the current threat and which may remain in place for many years. To address this threat, governments are already spending billions, while organisations like NIST and ETSI are several years into programmes to identify and select post-quantum algorithms (PQAs) and industry and academia are innovating. And we are approaching some agreement on a suite of algorithms that are probably quantum safe; both the UK’s NCSC and the US’ NSA endorse the approach of enhanced Public Key cryptography using PQA along with much larger keys. The NCSC recommends that the majority of users follow normal cyber security best practice and wait for the development of NIST standards-compliant quantum-safe cryptography (QSC) products. That potentially leaves the IoT with a problem. Most of these enhanced QSC standards appear to require considerable computing power to deal with complex algorithms and long keys – and many IoT sensors may not be capable of running them.


What is industry cloud?

Industry cloud platforms allow businesses operating in the same sector to share or sell data, technologies, and processes to each other. The potential benefits can be significant, as an industry cloud enables interrelated members of a supply chain to access insights derived from potentially expanded data sets. An industry cloud can offer companies an exciting opportunity to exploit existing data they are not leveraging in a constructive way. ... Joining an industry cloud can offer significant benefits for companies, but many may reflexively balk at the idea of sharing or selling data. Consequently, it’s important that a company has a supportive constituency when considering an industry cloud. Each type of vendor has its own challenges in developing an industry cloud platform. For industry clouds driven by supply chain leaders, the most important requirement will be reexamining tools and methodologies to meet the needs of less sophisticated supply chain participants. Avoiding the temptation to abandon the industry cloud and retreat to a standard cloud for internal use is also a challenge.


Why Instagram Threads is a hotbed of risks for businesses

Threads is very easy to both download and sign up for, as it integrates seamlessly with a user's Instagram account when first signing up for the platform. However, this seamless integration could pose security risks, according to a blog from AgileBlue. Instagram, Facebook, and now Threads are all owned by Meta and for many users, each of their Meta accounts share the same login credentials between each of the platforms. "This makes it much easier for malicious actors to access information as gaining access to just one account ultimately gives them access to all Meta accounts," the blog said. In fact, as of writing, only users with an Instagram account can create a Threads account, so if an individual wants to sign up for Threads, they will first have to create an Instagram account. "If an employee's Threads account is compromised, malicious actors can impersonate the employee to gather information or spread misinformation within their close circle," Guenther says.


With BYOD comes responsibility — and many firms aren't delivering

Management must learn and share the benefits of these systems, make it crystal clear how data will be handled, and put protection in place to ensure personal data remains personal. Communication is critical here. It's also critical in securing the inevitable weak point of any form of security protection — the users themselves. With that in mind, companies should invest in training staff in security awareness and encourage them to update devices as and when those updates appear. Companies should also set standards — and devices that don’t meet those standards, in terms of security protection, should not gain access to corporate systems. This is all common sense stuff, really. We know the security environment is extremely challenging — even police forces are regularly hacked. In that context, it makes total sense to think about how to manage the devices connected to your systems and to put in place the software, security, and user education it takes to protect your business environments. The cost of device management is relatively negligible compared to the consequences of a successful ransomware attack, after all.


Why Enterprise Architecture Must Drive Sustainable Transformation

To some, it may seem odd to present these as parallel, equivalent pressures on businesses. Surely, the continued viability of civilization as we know it should far outweigh any governmental or regulatory proposal in our thinking about the future? The importance of the changing regulatory environment, however, lies not just in its ability to trigger business action: it is a real opportunity for businesses to transform themselves to a more meaningful, consequential sustainability approach. A report co-authored by the WEF and Boston Consulting Group, ‘Net-Zero Challenge: The supply chain opportunity’, found that the supply chains of just eight sectors, including food, construction, and fashion, account for more than 50% of global emissions. It also found that 40% of the emissions could be abated with already-available measures like circular manufacturing and renewable energy. Even achieving net zero emissions in those supply chains, according to the report’s investigations, would only raise costs for end-consumers by 1%-4% on average.


Lean for the modern company

A strong esprit de corps among team members has also long been critical to support healthy growth and the creation of synergistic value, and the book emphasizes the importance of building a healthy culture able to support lean processes and outcomes. This section includes clever material on nurturing a culture of experimentation and discovery, and validating trust by constantly raising the bar on deliverables and expectations. May and Dominguez ground their principles in the core lean ideal of starting with value and working backward—focusing obsessively on improving operations to seamlessly deliver for the customer what they call the “Job to be Done.” The authors’ material on accelerating value creation recapitulates this goal and reminds readers to be vigilant about combating the inevitable waste generated by successful companies. As a writer about lean for nearly two decades, I’ve often been frustrated by misrepresentations of this dynamic system by management gurus who tout only thin-sliced elements of it.


4 Key Observability Best Practices

For cost reasons, becoming comfortable with tracking the current telemetry footprint and reviewing options for tuning — like dropping data, aggregating or filtering — can help your organization proactively monitor costs and platform adoption. The ability to track telemetry volume by type (metrics, logs, traces or events) and by team can help define and delegate cost-efficiency initiatives. Once you’ve gotten a handle on how much telemetry you’re emitting and what it’s costing you, consider tracking daily and monthly active users. This can help you pinpoint which engineers need training on the platform. ... Teams need better investigations. One way to ensure a smoother remediation process is an organized, breadcrumb-style workflow rather than ten different bookmarked links and a mental map of what data lives where. Start by understanding what telemetry your system emits across metrics, logs and traces, then pinpoint duplication or better sources of data.
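The volume-by-type-and-team tracking described above can be sketched with a few lines of aggregation code. The record shape here is a hypothetical example, not any particular vendor's schema:

```python
from collections import defaultdict

def summarize_telemetry(records):
    """Aggregate telemetry volume (in events) by owning team and signal type.

    Each record is a dict like {"team": "payments", "type": "logs", "count": 5000}.
    Returns {(team, type): total_count}, which makes per-team cost reviews
    and cost-efficiency initiatives easy to delegate.
    """
    totals = defaultdict(int)
    for rec in records:
        totals[(rec["team"], rec["type"])] += rec["count"]
    return dict(totals)

# Example: spot which team emits the most logs
records = [
    {"team": "payments", "type": "logs", "count": 5000},
    {"team": "payments", "type": "metrics", "count": 1200},
    {"team": "search", "type": "logs", "count": 300},
]
print(summarize_telemetry(records)[("payments", "logs")])  # 5000
```

The same keyed totals can feed a dashboard or a monthly report that flags which teams' telemetry is growing fastest.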


Software Engineering in the Age of Climate Change: A Testing Perspective

Regression testing confirms that new code does not break existing functionality. Preventing regressions reduces the need for repeated testing and bug fixes, optimizing the software development lifecycle and minimizing unnecessary computational resources. ... Online education platforms introduce new features to enhance user experiences. Regression testing ensures these changes do not disrupt existing lessons or content delivery. By maintaining stability, energy is saved by minimizing the need for post-deployment fixes. Suppose a telecommunications company is rolling out a software update for its network infrastructure to improve data transmission efficiency and reduce latency. The update includes changes to the routing algorithms used to direct data traffic across the network. While the primary goal is to enhance network performance, there is a potential risk of introducing regressions that could disrupt existing services. Before deploying the software update to the entire network, the telecommunications company conducts thorough regression testing.
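A minimal regression suite for a scenario like the routing update above might pin known-good outputs before the change and rerun them after it. The metric and its values here are hypothetical, purely to illustrate the golden-case pattern:

```python
def route_cost(path):
    """Hypothetical routing metric: total per-hop latency plus a fixed
    switching overhead per hop. A regression suite pins its known-good
    outputs so an 'optimized' rewrite can't silently change behavior."""
    OVERHEAD_MS = 2
    return sum(path) + OVERHEAD_MS * len(path)

# Golden cases captured before the update; rerun after every change.
GOLDEN = {
    (10, 5, 7): 28,
    (3,): 5,
    (): 0,
}

def test_route_cost_regressions():
    for path, expected in GOLDEN.items():
        assert route_cost(list(path)) == expected

test_route_cost_regressions()
```

If the new routing algorithm changes any golden output, the suite fails before deployment rather than after, which is exactly the energy- and rework-saving effect the article describes.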


How to make your developer organization more efficient

Automating manual tasks and repetitive processes is crucial for increasing developer efficiency. “Employing automation for tasks that many engineers face throughout their SDLC helps to shift focus towards human value-add activities. This also increases overall delivery throughput, with higher confidence in our development lifecycle, and produces consistent processes across teams that would otherwise be handled one-off and uniquely,” said Joe Mills. Developers can engage a team of automation experts to assess certain processes and tasks and help uncover automation opportunities. The team uses a hub-and-spoke model to scale its efforts across development teams at Discover and can help teams with robotic process automation, business automation, or code automation. ... In addition to these initiatives, engineers at Discover adhere to a set of practices, internally called CraftWorx, that define and direct the agile development process. Aligning engineers across these practices reduces friction because engineers and developers follow the same development practices.



Quote for the day:

"A leader takes people where they would never go on their own." -- Hans Finzel

Daily Tech Digest - July 01, 2023

CERT-In cyber security norms bar use of Anydesk, Teamviewer by govt dept

The cybersecurity watchdog CERT-In has barred the use of remote desktop software such as AnyDesk and TeamViewer in government departments under new security guidelines released on Friday. The guidelines prescribe that government departments use virtual private networks (VPNs) for accessing network resources from remote locations and enable multi-factor authentication (MFA) for VPN accounts. "Ensure to block access to any remote desktop applications, such as Anydesk, Teamviewer, Ammyy admin etc," the Guidelines on Information Security Practices for Government Entities said. CERT-In (the Indian Computer Emergency Response Team) said the purpose of these guidelines is to establish a prioritised baseline for cyber security measures and controls within government organisations and their associated organisations. Minister of State for Electronics and IT Rajeev Chandrasekhar said in an official statement that the government has taken several initiatives to ensure an open, safe, trusted and accountable digital space.


Navigating Product Owner Accountability in Scrum: Debunking Myths and Overcoming Anti-Patterns

In a misguided attempt to ‘help’ Product Owners with their important responsibilities, some organizations establish two Product Owners for a single Product. However, while this may seem, at first, to be helpful, this actually causes a lot of problems for both Product Owners involved. When multiple Product Owners exist, conflicting ideas and visions may arise, diluting the product's direction and impeding progress. ... Instead, the Product Owner can delegate tasks such as creating Product Backlog items, maintaining the roadmap, or gathering metrics to Developers on the Scrum team. However, it is important to note that while the Product Owner may delegate as needed, the Product Owner ultimately remains accountable for items in the Product Backlog as well as the product forecast or roadmap, thus ensuring that there is a single, unifying vision and goal for the product and that the Product Backlog is in alignment with that vision. If the Product Owner is delegating the creation of Product Backlog items to Developers, what does that mean? 


Cisco firewall upgrade boosts visibility into encrypted traffic

“What our competitors are saying is ‘just decrypt everything.’ But we know in the real world, customers refrain from doing that due to data privacy concerns and to meet legal/compliance requirements. Furthermore, decrypting and re-encrypting data requires technical prowess not everyone has, increases the attack surface, and also causes severe performance challenges,” Miles said. EVE works by extracting two primary types of data features from the initial packet of a network connection, according to a blog written by Blake Anderson, a software engineer in Cisco’s advanced security research group. First, information about the client is represented by the Network Protocol Fingerprint (NPF), which extracts sequences of bytes from the initial packet and is indicative of the process, library, and/or operating system that initiated the connection. Second, it extracts information about the server such as its IP address, port, and domain name (for example a TLS server_name or HTTP Host). 
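As a rough illustration of byte-sequence fingerprinting, the sketch below hashes the leading bytes of a connection's first packet. This is not Cisco's actual NPF algorithm, just the general idea: clients built from the same TLS library tend to produce identical initial-packet prefixes, so matching digests suggest the same client stack without decrypting any payload:

```python
import hashlib

def first_packet_fingerprint(packet: bytes, n: int = 64) -> str:
    """Illustrative only -- not Cisco's NPF. Digest the first n bytes of a
    connection's initial packet; equal digests hint at the same process,
    library or OS having initiated the connection."""
    return hashlib.sha256(packet[:n]).hexdigest()

# Two connections from the same (hypothetical) client library share a
# leading byte sequence, so their fingerprints match.
a = first_packet_fingerprint(b"\x16\x03\x01" + b"\x00" * 61)
b = first_packet_fingerprint(b"\x16\x03\x01" + b"\x00" * 61)
print(a == b)  # True
```

A production system would combine a fingerprint like this with the server-side metadata the article mentions (IP, port, TLS server_name) before classifying the flow.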


Scrum vs. Kanban vs. Lean: Choosing Your Path in Agile Development

While Scrum is commonly associated with software development teams, its principles and lessons have broad applicability across various domains of teamwork. This versatility is one of the key factors contributing to the widespread popularity of Scrum. Scrum is founded upon the concept of time-boxed iterations called sprints, which are designed to enhance team efficiency within cyclical development cycles. ... Kanban is well-suited for organizations seeking to embrace the benefits of agility while minimizing drastic workflow changes. It is particularly suitable for projects where priorities frequently shift, and ad hoc tasks can arise anytime. Kanban is a flexible methodology that can be applied to various domains and teams beyond software development. ... Lean methodology strongly emphasizes market validation and creating successful products that provide value to users. It is particularly well-suited for new product development teams or startups operating in emerging niches where a finished product may not yet exist, and resources are limited.


3 Ways to Build a More Skilled Cybersecurity Workforce

In addition to insights around highly sought-after skill sets and job titles, OECD's report also reveals that demand for cybersecurity professionals has spread beyond the confines of major urban centers. It calls for a more decentralized workforce to meet demand in underserved areas. ... If companies are to close the skills gap and meet the current demand for cybersecurity workers, they will need to broaden their horizons to account for more nontraditional cybersecurity career paths. In doing so, they will enhance the industry with a broader range of unique experiences and life skills. Recruiting more diverse candidates also allows companies to approach security challenges from different angles and identify solutions that may not have been considered otherwise. When a workforce is as diverse as the cybersecurity threats an organization faces, it can pull from a broader range of professional and personal experiences to more effectively and inclusively protect themselves and their end users.


AI's Teachable Moment: How ChatGPT Is Transforming the Classroom

"Teachers could say, 'Hey my students are really interested in TikTok,' then feed that to the AI," says Liu. "An AI could come up with three analogies related to TikTok that connect students to their needs and interests." Liu believes we absolutely need to acknowledge the immediate threats surrounding AI and its initial impact on teachers, particularly around skills assessments and cheating. One approach he takes is to speak openly with students and acknowledge that AI is the new thing and that we're all learning about it – what it can do, where it might lead. The more open conversations educators have with students, he says, the better. In the near term, students are going to cheat. That's impossible to avoid. YouTube and TikTok are bulging at the seams with tricks to help students avoid plagiarism trackers. In the medium term, Liu believes, we need to reevaluate what it means to grade students. Does that mean allowing students to use AI in assessments? Or changing how to teach topics? Liu isn't 100% sure.


Top 5 Benefits Of Blockchain Technology

Transparency within the Blockchain ecosystem refers to the open visibility of transactions, enabling all participants to validate and verify the recorded data. Unlike traditional systems that rely on centralized authorities, Blockchain operates on a decentralized network, where each transaction is recorded on a public ledger known as the Blockchain. ... Immutability is a cornerstone of Blockchain technology. It guarantees that once a transaction is recorded on the Blockchain, it becomes virtually impossible to alter or tamper with the data. This is achieved through a combination of cryptographic techniques and consensus mechanisms. Blockchain achieves data immutability by using cryptographic hashing. Each transaction is assigned a unique cryptographic hash, which is essentially a digital fingerprint. This hash is created by applying complex mathematical algorithms to the transaction data, resulting in a fixed-length string of characters. Furthermore, Blockchain relies on consensus mechanisms such as Proof of Work (PoW) or Proof of Stake (PoS) to validate and verify transactions.
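The hash-chaining that makes tampering detectable can be shown in a few lines. This is a toy sketch, not a real consensus implementation:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents (its 'fingerprint')."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Each block embeds the previous block's hash, so editing an old
# transaction changes its digest and breaks every later link.
genesis = {"prev": "0" * 64, "tx": "Alice pays Bob 5"}
block2 = {"prev": block_hash(genesis), "tx": "Bob pays Carol 2"}

tampered = dict(genesis, tx="Alice pays Bob 500")
print(block2["prev"] == block_hash(genesis))   # True: chain is intact
print(block2["prev"] == block_hash(tampered))  # False: tampering is detectable
```

In a real network, PoW or PoS consensus then decides which chain of such linked blocks the participants accept as authoritative.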


Technical Debt tracking supports projects to “do it right”

For decades, there have been logs of outstanding bugs found in testing but not corrected before the project is implemented. The term technical debt adds the concept that there are consequences to those decisions, and that there are strong reasons to prioritize the follow-up to fix things and clear that list. Most of us are aware of workarounds that were left in place permanently and eventually cost too much. We may have seen a system with poor performance that slowed the work of key workers and/or was missing functionality that impacted the customer experience. All of these are important reasons that technical debt should be cleared up. There are other reasons too. Generally, people do not purposefully create poor designs or code with bugs. ... One of the interesting concepts that has been offered by Martin Fowler is the Technical Debt Quadrant that talks about the prudent but inadvertent technical debt that is created as we learn during a project and realize how the project should have been done.


Successful digital transformation requires simplistic thinking

While organizations are aggressively pursuing transformation goals, Chaudhry warned that antiquated mindsets and a range of internal factors can seriously inhibit innovation and prevent businesses from achieving their goals. Most notable among these is a complacent culture among some IT leaders who are stuck in a loop of traditional, outdated practices. “IT plays the most important role in driving transformation. You play the most important role, but you also need to act fast to drive change,” he said. “You can’t sit back and say ‘this is how things have been done for the last 30 years, so let’s keep doing so’.” ... Inertia, as he puts it, is a powerful inhibitor that locks IT leaders and organizations into an outdated mindset which prevents them from embracing change. “Inertia is powerful, and it holds you back because we are comfortable with what we’ve been doing for the last 10, 20 years or so,” he said. Research has often identified inertia as a common inhibitor in digital transformation, whereby teams are reluctant - or unwilling - to accept change.


Strategies to drive the Data Mesh cultural transformation

It’s important to have consistent and clear communication to ensure that everyone understands the reasons and the effects of change. Leaders must communicate the vision and benefits of Data Mesh. They also need to provide guidance on how the new ways of working are going to be adopted through well-defined structures, roles and responsibilities for the new data product teams. To ensure data product ownership and accountability, defining clear KPIs and metrics for each data product team to measure success and track progress is critical. ... Rather than trying to adopt Data Mesh all at once, organizations can start with small pilot projects and gradually expand. This approach can help understand how processes defined in vitro work in real life. It also comes with lessons learned which help followers avoid the initial mistakes. ... This ensures that everyone in the organization understands the new concepts and ways of working. It could include training sessions and coaching on Data Mesh, product thinking, design, user research, agile methodologies, cross-functional team collaboration, and data product ownership.



Quote for the day:

"Leaders need to strike a balance between action and patience." -- Doug Smith

Daily Tech Digest - February 13, 2022

Software is eating the world–and supercharging global inequality

There’s no denying that the world is rapidly changing, with innovations such as artificial intelligence, robotics, blockchain, and the cloud. Each of the previous three industrial revolutions, including the most recent digital revolution, led to economic growth and helped eliminate mass poverty in many countries. However, these moments also concentrated wealth in the hands of those who control new technologies. VCs will play an increasing role in determining what tech factors into our daily lives over the next ten years, and we must ensure that technology is used to modernize antiquated industries and can create a better standard of living worldwide. Ensuring that this new wave of technology benefits as many as possible is the challenge of our generation, especially considering that the pending climate crisis will disproportionately impact lower-income and marginalized communities. Bitcoin farms do not benefit maize farmers in Lagos who face deadly floods, but VCs' obsession with crypto generates outsized investment and the wealthy get wealthier.


How open source is unlocking climate-related data's value

One of the barriers to assessing the cost of risks and opportunities in climate-change research is the lack of reliable and readily accessible data about climate. This data gap prevents financial sector stakeholders and others from assessing the financial stability of mitigation and resilience efforts and channeling global capital flows towards them. It also forces businesses to engage in costly, improvised ingestion and curation efforts without the benefit of shared data or open protocols. To address these problems, the Open Source Climate (OS-Climate) initiative is building an open data science platform that supports complex data ingestion, processing, and quality management requirements. It takes advantage of the latest advances in open source data platform tools and machine learning and the development of scenario-based predictive analytics by OS-Climate community members. To build a data platform that is open, auditable, and supports durable and repeatable deployments, the OS-Climate initiative leverages the Operate First program. 


Multicloud Strategy: How to Get Started?

Managing security in the cloud is a daunting task, especially for a multicloud. For this reason, the recommended approach is to have a Cloud Security Control Framework early in any cloud migration strategy. But what does this mean? Today the best practice is to change the security mentality from perimeter security to a more holistic approach, which considers cybersecurity risks from the very design of the multicloud deployment. It starts by allowing the DevSecOps team to build and automate modular guardrails around the infrastructure and application code right from the beginning. You can think of these guardrails as cross-cloud security controls based on the current trend of implementing a Zero Trust networking architecture. Under this new paradigm, all users and services are “mistrusted” even within the security perimeter. This approach requires a rethinking of access controls, since the workloads may get distributed and deployed across different cloud providers. Implementing security controls at all levels is key, from infrastructure to application code, services, networks, data, users’ access, etc.


The future of enterprise: digging into data governance

Enterprise technology is always moving forward, and so, as more businesses move to a cloud-focused strategy, the boundaries of what that means are evolving. New models such as serverless and multi-cloud are redefining the ways in which companies will need to manage the flow and ownership of their data, and they’ll require new ways of thinking about how data is governed. According to Syed, these new models are going to make even more important the ability to decentralize data architecture while maintaining centralized governance policies. “A lot of companies are going to invest in trying to figure out, ‘How do I build something that combines not just my one data source, but my data warehouse, my data lake, my low-latency data store and pretty much any data object I have?’ How do you bring it all together under one umbrella? The tooling has to be very configurable and flexible to meet all the different lines of businesses’ unique requirements, but also ensure all the central policies are being enforced while you are producing and consuming the data.” 


Cloud security training is pivotal as demand for cloud services explode

Organizations can be caught out by thinking that they can lift-and-shift their existing applications, services and data to the cloud, where they will be secure by default. The reality is that migrating workloads to the cloud requires significant planning and due diligence, and the addition of cloud management expertise to their workforce. Workloads in the cloud rely on a shared responsibility model, with the cloud provider assuming responsibility for the fabric of the cloud, and the customer assuming responsibility for the servers, services, applications and data within (assuming an IaaS model). However, these boundaries can seem somewhat fuzzy, especially as there isn’t a uniform shared responsibility model across cloud providers, which can result in misunderstandings for companies that use multi-cloud environments. With so much invested in cloud infrastructure – and with a general lack of awareness of cloud security issues and responsibilities, as well as a lack of skills to manage and secure these environments – there is much to be done.


How the 12 principles in the Agile Manifesto work in real life

Leaders who work with agile teams focus on ensuring that the teams have the support (tools, access, resources) and environment (culture, people, external processes) they need, and then trust them to get the job done. This principle can scare some leaders who have a more command-and-control management style. They wonder how they'll know if their team is succeeding and focusing on the right things. My response to these concerns is to focus on the team’s outcomes. Are they delivering working product frequently? Are they making progress towards their goals? Those are the metrics that warrant attention. It is a necessary shift in perspective and mindset, and it is one that leaders as well as agile teams need to make to achieve the best results. To learn more about how to support agile teams, leaders should consider attending the Professional Agile Leadership - Essential class. Successful agile leaders enable teams to deliver value by providing them with the tools that they need to be successful, providing guidance when needed, embracing servant leadership and focusing on outcomes.


How to Pick the Right Automation Project

With the beginning and end states clearly articulated, you can then specify a step-by-step journey, with projects sequenced according to which ones can do the most in early days to lay essential foundations for later initiatives. Here’s an example to illustrate how this approach can lead to better choices. At a construction equipment manufacturer, there are three tempting areas to automate. One is the solution a vendor is offering: a chatbot tool that can be fairly simply implemented in the internal IT help desk with immediate impact on wait times and headcount. A second possibility is in finance, where sales forecasting could be enhanced by predictive modeling boosted by AI pattern recognition. The third idea is a big one: if the company could use intelligent automation to create a “connected equipment” environment on customer job sites, its business model could shift to new revenue streams from digital services such as monitoring and controlling machinery remotely. If you’re going for a relatively easy implementation and fast ROI, the first option is a no-brainer. If instead you’re looking for big publicity for your organization’s bold new vision, the third one’s the ticket.


Hybrid work and the Great Resignation lead to cybersecurity concerns

As the fallout of the Great Resignation is still being felt by many enterprises, there are four main concerns raised by Code42’s report. With 4.5 million employees leaving their jobs in November 2021 alone, protecting data has become the first big challenge for industry leaders. Many employees leaving their roles have accidentally or intentionally taken data with them to competitors within the same industry, or even sometimes leveraged their former employers’ data for ransom. Business leaders are concerned with the types of data that are leaving, according to 49% of respondents, and 52% said they are concerned with what information is being saved on local machines and personal hard drives. Additionally, business leaders are more concerned with the content of the data that is exposed than with how it is exposed. Another major concern is the disconnect created when employees leave in droves, creating uncertainty about ownership of data. Cybersecurity practitioners want more say in setting their company’s security policies and priorities, since they are the ones dealing with the risks their employers face.


Interoperability must be a priority for building Web3 in 2022

In addition to being time-consuming to build, once-off bridges are often highly centralized, acting as intermediaries between protocols. Built, owned, and operated by a single entity, these bridges become bottlenecks between different ecosystems. The controlling entity decides which tokens to support and which new networks to connect. ... Another impact of the siloed nature of the blockchain space is that developers are forced to choose between blockchain protocols, and end up building dapps that can be used on only one network, but not the others. This cuts the potential user base of any solution down significantly and prevents dapps from reaching mass adoption. Developers then have to spend resources deploying their apps across multiple networks which for many means to fragment their liquidity across their network-specific applications. From these struggles and drains on time and money, we know that a more universal solution for interoperability is the only way forward. Our industry, perhaps the most innovative in the world today and packed with the most talented minds, must prioritize the principles of universality, decentralization, security, and accessibility when it comes to interoperability. 


Better Data Modeling with Lean Methodology

Lean is a methodology for organizational management based on Toyota’s 1930 manufacturing model, which has been adapted for knowledge work. Where Agile was developed specifically for software development, Lean was developed for organizations, and focuses on continuous small improvement, combined with a sound management process in order to minimize waste and maximize value. Quality standards are maintained through collaborative work and repeatable processes. ... Eliminate anything not adding value as well as anything blocking the ability to deliver results quickly. At the same time, empower everyone in the process to take responsibility for quality. Automate processes wherever possible, especially those prone to human error, and get constant test-driven feedback throughout development. Improvement is only possible through learning, which requires proper documentation of the iterative process so knowledge is not lost. All aspects of communication, the way conflicts are handled, and the onboarding of team members should always occur within a culture of respect.



Quote for the day:

"Take time to deliberate; but when the time for action arrives, stop thinking and go in." -- Andrew Jackson

Daily Tech Digest - July 09, 2021

How quantum networking could transform the internet

If QC comes to full fruition (an event that is far from certain), conceivably, anyone with access to a quantum computer could decrypt a strongly encrypted communication in mere minutes. Who has access to a quantum computer? Would you believe you do? QC's first practitioners are making quantum systems available for programming by registered experimenters and researchers, no credentials required, in many cases for free. It's not easy, by any means, nor does it make much sense to rational people, but QC service (such as it is) is operational. If QN comes to full fruition (see above), some means of securing communications will become feasible again. And because of the physics principle it relies on, any "back door" in QN, such as those some government agencies would ask network equipment makers to include, would be entirely impossible to engineer. What we have here, to paraphrase Strother Martin, is an open, collaborative, academic effort whose success would immediately trigger "failure to communicate" securely by conventional means. The intellectual property value of any digital encryption technology for classical systems would plummet like a spent fuel rod in a core meltdown.


The Bank of the Future Will Have Data Vaults and Money Vaults

Our belief is that the banks of the future will not necessarily be seen as banks or finance institutions. They will be seen more and more as data hubs or ecosystems for a much broader set of industry verticals. Not only will they provide better financial products and more relevant financial products consumers, but they will be in a perfect position to leverage data to form data alliances and data partnerships with other members of the ecosystem, such as grocery stores, airlines, energy companies, etc. And, it will be up to the user if they are willing to share their information or not. If they share, they will receive benefits and services from members of these alliances, which are complementary businesses that are sharing data based on the consent of the user to receive better, more relevant and higher impact service. ... There are a number of interesting interaction layers that enable banks to interact with their customers better. These could be conversational interfaces, like we have seen with chatbots. There are also voice assistants, like Google Home and Alexa.


Ransomware recovery: Plan for it now

A big ransomware attack is not the time to go it alone. There are resources available that will assist you in halting the attack and recovering when it feels like all hell is breaking loose, and there are steps to take that might help authorities catch the criminals. Part of your ransomware-response plan should include the contact information of these resources. If you have a cyber-insurance policy, it can be very helpful. It can put you in touch with specialists to help guide you through your response. Contact them now, before you are attacked, to establish their response process and document it in your plan. If you don’t have such a policy, consider getting one. You should also immediately contact the local field office of the FBI. Its level of involvement in a particular case will be driven by the extent and nature of the attack, but it says that notifying them of all attacks helps them to better respond to ransomware in general. They also have access to tools and resources unavailable to many other organizations that can help, especially if another country is identified as the source. When reaching out for help, beware of companies that claim they will decrypt the data for you.


Applying Lean Tools and Techniques to Scrum

Scrum teams are beginning to discover that Scrum as a framework can in fact be defined as wasteful. That waste is often masked by in-person interactions, which Scrum has always advocated for. While Scrum pairs well with other Agile frameworks and approaches, Lean, an approach which focuses on waste reduction, is rarely paired with Scrum, so the waste never becomes visible. This has led to a lack of awareness or focus on the elements of waste that exist. The movement to remote work has brought a lot of changes to contend with beyond how a team operates. Home setups are subpar compared to an office: from a simple lack of space, with some friends of mine working off a kitchen table for over a year now, to not having appropriate ergonomic equipment at home to handle a 40-hour week. That’s before the added strain of other people or families in the house; in my case, I have a 2.5-year-old, an 18-month-old, and a newborn, which brings an added dimension to the day! So off the bat, the environment changes are difficult to handle, but when you keep following Scrum as per the Scrum Guide, you see a magnification of the issues, which I am attributing to waste.


The mirrored world: What are the benefits of digital twins?

Intelligent twins have the power to turn data into actionable, big-picture insights, but only if fueled by complete and accurate data. COVID-19 revealed that enterprises cannot rely on historic data blindly -- they need to check and correct their models as the world changes. This approach requires a strategy for real-time data analytics that includes provisions for both IoT devices and sensors, and tools for data analysis and utilization. When this is achieved, enterprises can unlock a new way of understanding the business, and a new way of running it. Take Ericsson and Vodafone as examples. The two companies are working with e.GO, an electric vehicle manufacturer, to develop a factory of the future. In the factory, machines connect over a 5G network, sending data to a network operations center that powers a digital twin of the factory. The twin is then used to enable just-in-time processes and smart tools that empower human workers with data-driven intelligence. Intelligent twins have powerful simulation capabilities and, with your data foundation in place, they will let you reimagine the innovation process.


Three security lessons from a year of crisis

In the early days of the pandemic, when lockdowns and restrictions across the United States were at their height, contact centers and customer service teams saw huge spikes in call volume. Banks’ phones rang off the hook with questions about account numbers and Paycheck Protection Program loans, about unemployment payments and mortgage suspensions. Business dealings that could once be conducted face-to-face with a teller in a branch office now moved onto the phone. Contact agents not deemed “essential” pivoted to working from home; new systems had to be devised, implemented, and tested almost overnight. The result? More calls, longer hold times, and longer agent conversations. Of course, not every fraudulent use of a contact center requires speaking with another human being: unscrupulous users can “mine” interactive voice response systems for personal data. In some industries, the second quarter of 2020 saw an 800% increase in year-over-year call volume. Such elevated numbers weren’t sustainable, and there’s been some drop-off in call length and volume, but in the last quarter of 2020, call durations were still up 14% compared to pre-COVID levels, and typical waits were 11 minutes longer than they were before the pandemic.


Who’s behind the Kaseya ransomware attack – and why is it so dangerous?

Hackers infiltrated Kaseya, accessed its customers’ data, and demanded ransom for the data’s return. Making the hack particularly grave, experts say, is that Kaseya is what is known as a “managed service provider”. That means its systems are used by companies too small or modestly resourced to have their own tech departments. Kaseya regularly pushes out updates to its customers meant to ensure the security of their systems. But in this case, those safety features were subverted to push out malicious software to customers’ systems. This hack was particularly egregious because the bad actors behind it had targeted the very systems typically used to protect customers from malicious software, said Doug Schmidt, a professor of computer science at Vanderbilt University. “This is very scary for a lot of reasons – it’s a totally different type of attack than what we have seen before,” Schmidt said. “If you can attack someone through a trusted channel, it’s incredibly pervasive – it’s going to ricochet way beyond the wildest dreams of the perpetrator.” Kaseya has said that between 800 and 1,500 businesses were affected by the hack, although independent researchers have pegged the figure at closer to 2,000.


Ransomware gangs seek people skills for negotiations

People with the appropriate – and not necessarily technical – skill sets to succeed in ransom negotiations are particularly valued, Kela found. “We observed multiple posts [on the dark web] describing a new role in the ransomware ecosystem, negotiators, whose purpose is to force the victim to pay a ransom using insider information and threats,” said Kivilevich. “Victims started using negotiators – while a few years ago there was no such profession, now there is a demand for negotiating services. Ransomware-negotiation specialists partner with the insurance companies and have no lack of clients. Ransom actors had to up their game as well, in order to make good margins.” “As most ransom actors probably are not native English speakers, more delicate negotiations – specifically around very high budgets and surrounding complex business situations – required better English. When REvil’s representative was looking for a ‘support’ member of the team to hold negotiations, they specifically mentioned ‘conversational English’ as one of the demands. This is not a new case: actors are interested in native English speakers to use for spear-phishing campaigns.”


Facebook Launches Open Source Simulation Platform Habitat 2.0

Habitat 2.0 prioritises speed and performance over breadth of simulation capability, allowing researchers to test new approaches and iterate more quickly. For example, the researchers use a navigation mesh to move the robot instead of simulating wheel-ground contact. The platform does not, however, support non-rigid dynamics such as liquids, films, cloth, and ropes, nor tactile sensing or audio. These trade-offs make the Habitat 2.0 simulator two orders of magnitude faster than most 3D simulators available to academics and industry professionals. With these capabilities, researchers can now use the platform for complex tasks, such as cleaning up the kitchen or setting the table. For instance, the platform can simulate a Fetch robot interacting in ReplicaCAD (a 3D dataset) scenes at 1,200 SPS (steps per second), while existing platforms run at 10 to 400 SPS. It also scales well, achieving 8,200 SPS (273x real-time) multi-process on a single GPU and nearly 26,000 SPS (850x real-time) on a single node with 8 GPUs. Facebook researchers said such speeds significantly cut experimentation time, from six months to as little as two days.
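As a quick sanity check on the quoted figures, each "Nx real-time" multiplier should imply roughly the same underlying real-time step rate. The SPS numbers below come from the article; the ~30 SPS real-time rate is inferred from them, not stated in the source.

```python
# Sanity-check the reported Habitat 2.0 throughput figures: dividing each
# raw simulation rate by its real-time multiplier should recover roughly
# the same real-time step rate in both configurations.
reported = [
    (8_200, 273),   # single-GPU, multi-process: 8,200 SPS at 273x real-time
    (26_000, 850),  # single node, 8 GPUs: ~26,000 SPS at 850x real-time
]
for sps, multiplier in reported:
    print(f"{sps:>6} SPS / {multiplier}x -> ~{sps / multiplier:.1f} SPS real-time")
```

Both configurations work out to roughly 30 steps per second of real-time robot control, so the two "x real-time" claims are internally consistent.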


The post-pandemic office: how to prevent burnout

The burnout we’re seeing now is as unique as the year we’ve just lived through, and there are several contributing factors. According to the Flex Study, almost a quarter of employees found themselves working longer hours while remote. Work often brought a welcome sense of normality, and provided distraction for those lucky enough to continue working, but the longer hours took a toll on many employees. As many as 37% of employees admitted to not taking a break of at least 30 minutes during their average work-from-home day. Burnout can stem from an inability to disconnect, too, with 60% of UK respondents stating they are unable to switch off from work. Of those, 23% are anxious about getting caught up on work, 22% are distracted and still looking at their phone, and 15% feel a sense of guilt. As businesses decide how they will work in the future, they will also need to address video fatigue as another root cause of burnout. Video calls have been the main tool for engaging with colleagues and customers over the past year, and many find this mentally exhausting. To create a more balanced schedule, businesses must set clear expectations about which meetings should be conducted via video and which can be audio only, as well as promoting meeting-free days.



Quote for the day:

"Most of the successful people I've known are the ones who do more listening than talking." -- Bernard Baruch