Daily Tech Digest - October 10, 2022

7 reasons to love the Rust language—and 7 reasons not to

Much of programming language design today focuses on creating functional languages that guide the coder into writing software that’s easier to analyze. Rust is part of this trend. Many developers love Rust’s logical, functional syntax that encourages structuring their code as a sequence of nested function calls. At the same time, Rust’s creators wanted to build something that could handle the bit-banging, low-level programming required to keep the IoT (Internet of Things) functioning. Rust offers the right combination for programmers looking to tackle these very real challenges with modern style. ... In some regards, learning Rust is a process of unlearning concepts and techniques you've likely followed from the beginning of your programming career. As an example, Rust requires embracing the ideas of scope and ownership, abandoning habits formed in garbage-collected languages like JavaScript and Java, where memory management is handled for you. If you want to leverage Rust's benefits, you have to be willing to relinquish some familiar features that can lead to bugs.


Board members should make CISOs their strategic partners

Awareness and funding do not translate into preparedness: although 75% of those surveyed feel their board understands their organization’s systemic risk, 76% think they have invested adequately in cybersecurity, 75% believe their data is adequately protected, and 76% discuss cybersecurity at least monthly, these efforts appear insufficient—47% still view their organization as unprepared to cope with a cyber attack in the next 12 months. Board members disagree with CISOs about the most important consequences of a cyber incident: internal data becoming public is at the top of the list of concerns for boards (37%), followed closely by reputational damage (34%) and revenue loss (33%). These concerns are in sharp contrast with those of CISOs, who are more worried about significant downtime, disruption of operations, and impact on business valuations. High employee awareness doesn’t protect against human error: although 76% of those surveyed believe their employees understand their role in protecting the organization against threats, 67% of board members believe human error is their biggest cyber vulnerability.


Platform Engineering, DevOps, and Cognitive Load: a Summary of Community Discussions

Reducing the cognitive pressure on development teams enables them to focus more readily on the core business code. Majors feels that "the more swiftly and easily developers can move, the better your platform team". In a recent Twitter thread, Majors elaborated on the relationship platform teams have with infrastructure and business code: Platform teams uniquely sit between these two tectonic plates -- infra code and business code, each moving at different speeds -- allowing other engineers to completely abstract infrastructure away. Majors draws a clear line between DevOps and platform engineering in stating "DevOps is about automation and managing infrastructure. Platform is about not having infra to run." This definition aligns with another statement made by Majors that platform teams should focus on paying other people to run infrastructure, and conserve their development cycles for the development platform. Majors states that the goal of the platform team is to "run less software".


Hackers can guess your password using thermal imagery

Thermal attacks can occur after users type their passcode on a computer keyboard, smartphone screen or ATM keypad and then leave the device unguarded. A passer-by equipped with a thermal camera can take a picture that reveals where their fingers have touched the device. The brighter an area appears in the thermal image, the more recently it was touched, so the order in which the keys were pressed can be estimated. Previous research by Dr Mohamed Khamis, who led the development of the system, found that ThermoSecure could reveal 86 per cent of passwords when thermal images are taken within 20 seconds, dropping to 62 per cent after 60 seconds. They also found that within 20 seconds, ThermoSecure was capable of successfully guessing 67 per cent of long 16-character passwords. As passwords grew shorter, success rates increased – 93 per cent of eight-symbol passwords were cracked and all six-symbol passwords were successfully guessed. Another factor that made it easier for ThermoSecure to guess passwords was the typing style of keyboard users.


EU rolling out measures for online safety and AI liability

“The Digital Services Act is one of the EU’s most ground-breaking horizontal regulations and I am convinced it has the potential to become the ‘gold standard’ for other regulators in the world,” said Jozef Síkela, minister for industry and trade. “By setting new standards for a safer and more accountable online environment, the DSA marks the beginning of a new relationship between online platforms and users and regulators in the European Union and beyond.” Under the DSA, providers of intermediary services – including social media, online marketplaces, very large online platforms (VLOPs) and very large online search engines (VLOSEs) – will be forced into greater transparency, and will also be held accountable for their role in disseminating illegal and harmful content online. For example, the DSA will prohibit platforms from using targeted advertising based on the use of minors’ personal data; impose limits on the use of sensitive personal data for targeted advertising, including gender, race and religion; and introduce obligations on firms to react quickly to illegal content.


How to Prevent Turnover in DevOps Teams

Even though software engineers like to have a sense of ownership, we shouldn’t discourage flexibility—people easily become bored working on the same thing for years and years. There’s also the sunk-cost fallacy to keep in mind: we tend to value things more simply because we’ve put time and effort into them. Thus, providing flexibility to pivot when it makes sense can increase overall satisfaction and output. Accordingly, flexible management is also crucial for embracing pivots when they are necessary. For example, if a project is well underway but an engineer identifies a new solution that is more elegant, team leads should be open to recognizing and acting on the change. But to realize this sort of relationship, trust and openness must be bidirectional, said Sutter. If engineers can’t express their ideas or are afraid to tell their boss they’re wrong, these important conversations can’t happen. A flexible structure is also necessary to attract talent that prefers a more modern work-life balance.


Why Traditional Logging and Observability Waste Developer Time

To handle the many mechanisms and services newer applications use or offer, they were broken down into their own microlevel apps: microservices. Pulling all the components out of a monolith so each one could run more efficiently on its own obviously required a complex architecture to make them work together. Cloud native DevOps shortened the development cycle rather organically. Past monolith environments made replicating things in testing pretty simple. But with the cloud, there are too many moving parts. Each cog and gear — an instance, a container, the second deployment of some app — has its own configuration. Add in the exact conditions affecting some individual user experience or the availability of some cloud resource, and you have a rather irreproducible set of conditions. Hence, devs need to anticipate more and more issues before full deployment, especially if they’re spinning out the process to another “as a service” provider (serverless in particular). If they don’t, late-stage troubleshooting becomes overwhelming.


How to Budget Effectively for Multi-Cloud

The most effective approach to multi-cloud budgeting is to partner across your organization to understand workload plans, specifically regarding the cloud provider of choice, says A.J. Wasserman, product owner, Cloud FinOps, with Liberty Mutual Insurance. “This will provide a solid baseline for forecasting, which can then be used to drive budgeting,” she explains. “As you go through this process, it's important to attempt to segment the budget by cloud provider to understand how your actuals are tracking compared to the original budget.” The best approach to multi-cloud budgeting is to focus on a multi-year plan versus an annual budget to allow for both tactical and strategic considerations, Hoecker advises. Looking beyond budgeting and into financial operations, it's important to define a common tagging approach that can be applied consistently across clouds. This will enable common views, as well as the ability to compare cloud consumption and costs between cloud service providers, Potter says. “Cloud FinOps solutions can help provide real-time insight into cloud spend versus budgets, and alert relevant stakeholders early if costs are exceeding expectations,” he notes.
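A common tagging approach of the kind Potter describes might look like the sketch below. The tag names and values here are illustrative assumptions, not a prescribed standard; the point is that the same keys are applied identically in every cloud so that consumption and cost can be compared across providers:

```
# Illustrative common tag set, applied identically in every cloud provider
cost-center: "finops-1234"
environment: "production"
application: "claims-portal"
owner: "team-payments@example.com"
budget-line: "2023-cloud-opex"
```

With a shared schema like this, a FinOps tool can group spend by `cost-center` or `application` regardless of which provider billed it.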


Email Defenses Under Siege: Phishing Attacks Dramatically Improve

Attackers are improving in large part because of the effort they put into collecting intel for targeting victims with social engineering. For one, they're utilizing the vast amounts of information that can be harvested online, says Jon Clay, vice president of threat intelligence for cybersecurity firm Trend Micro. "The actors investigate their victims using open source intelligence to obtain lots of information about their victim [and] craft very realistic phishing emails to get them to click a URL, open an attachment, or simply do what the email tells them to do, like in the case of business e-mail compromise (BEC) attacks," he says. The data suggests that attackers are also getting better at analyzing defensive technologies and determining their limitations. To get around systems that detect malicious URLs, for example, cybercriminals are increasingly using dynamic websites that may appear legitimate when an email is sent at 2 a.m. but will present a different site at 8 a.m., when the worker opens the message.


A Guide to Process Mapping for Seamless Software Testing

Process mapping helps businesses become more efficient by providing insight into their processes. It helps identify bottlenecks, repetitions, and delays in the flow of a process, as well as boundaries, responsibilities, and effectiveness metrics, and it helps set a schedule baseline. When mapping a process, you identify each step, draw each step using the appropriate shape or symbol, and show the flow by drawing arrows to connect the steps. This can be done by hand or using process mapping software. ... There are two ways process mapping can help software testers in coding and debugging: process mapping for debugging, and process mapping for control flow and statistical analysis. Every software developer can tell you about the drudgery of debugging a piece of software. Developers can spend hours combing through code trying to find the piece that is generating an error or incorrect output.
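As a rough sketch of the idea (the step names here are invented purely for illustration), a process map can also be represented in code as an adjacency list, which makes structural questions like "which steps do many paths funnel through?" easy to answer:

```python
# A miniature testing process map: each step points to the steps that follow it.
process_map = {
    "Receive build": ["Smoke test"],
    "Smoke test": ["Regression suite", "Report failure"],
    "Regression suite": ["Sign-off"],
    "Report failure": ["Receive build"],
    "Sign-off": [],
}

def incoming_counts(process_map):
    """Count incoming arrows per step; heavily funneled steps are bottleneck candidates."""
    counts = {step: 0 for step in process_map}
    for next_steps in process_map.values():
        for step in next_steps:
            counts[step] += 1
    return counts

for step, count in incoming_counts(process_map).items():
    print(step, count)
```

The same structure drawn with shapes and arrows on paper is what the diagramming exercise produces; the code form simply makes the counting mechanical.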



Quote for the day:

"A throne is only a bench covered with velvet." -- Napoleon Bonaparte

Daily Tech Digest - October 09, 2022

7 ways to foster IT innovation

Asking your team to be innovative is like asking an athlete to play better. While it may feel motivational and instructive to say it, it’s most often taken as disapproving and vague to the person receiving it. So if you want people to innovate, define specifically what you’re looking for them to do. Think specificity. My definition of IT innovation: The successful creation, implementation, enhancement, or improvement of a technical process, business process, software product, hardware product, or cultural factor that reduces costs, enhances productivity, increases organizational competitiveness, or provides other business value. ... Building an innovative culture is not only people-oriented but process-oriented. You must develop a formalized process that identifies, collects, evaluates and implements innovative ideas. Without this process, great ideas and potential innovations die on the vine. There also has to be an appreciation and understanding that innovative ideas can come from many directions, including your employees, internal business partners, customers, vendors, competitors, or through accidental discovery.


Shift Left Approach for API Standardization

Having clear and consistent API design standards is the foundation for a good developer and consumer experience. They let developers and consumers understand your APIs in a fast and effective manner, reduce the learning curve, and enable them to build to a set of guidelines. API standardization can also improve team collaboration, provide guiding principles that reduce inaccuracies and delays, and contribute to a reduction in overall development costs. Standards are so important to the success of an API strategy that many technology companies – like Microsoft, Google, and IBM – as well as industry organizations like SWIFT, TMForum and IATA use and support the OpenAPI Specification (OAS) as their foundational standard for defining RESTful APIs. ... The term “shift left” refers to a practice in software development in which teams begin testing earlier in the lifecycle than before, helping them focus on quality and work on problem prevention instead of detection.
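For a flavor of what an OAS-based standard looks like in practice, here is a minimal OpenAPI 3.0 fragment; the path and field names are hypothetical, invented for illustration:

```
openapi: "3.0.3"
info:
  title: Orders API
  version: "1.0.0"
paths:
  /orders/{orderId}:
    get:
      summary: Retrieve a single order
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested order
```

Because the contract is machine-readable, it can be linted against design guidelines in CI, which is exactly where the shift-left idea applies to API standardization.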


3 ways CIOs can empower their teams during uncertainty

By understanding data from different parts of the business, CIOs are in a unique position to see first-hand what efforts are producing the highest return. They can also identify gaps in knowledge and efficiency. Data analytics provide information used to set goals and expectations that allow the company to adapt in real-time as priorities change. As data stewards, CIOs will determine the origin of the most relevant data points and must be able to present these to other C-suite executives to help them make the best-informed choices. ... As the face of the IT department, CIOs can set the tone for a company’s culture, both inside and outside the building’s walls. They can articulate why new digital technologies are implemented and foster a forward-thinking environment. Additionally, they can connect the day-to-day actions of IT with their greater strategic vision. ... CIOs can help drive enterprise agility by always putting the customer at the center of decisions. The CIO can collaborate closely with business leaders to understand the business priorities and then develop a plan for how technology can drive the most value for the customer.


In defense of “quiet working”

Leaders need to raise their game and do their part to make work more engaging and crack down on bad managers who make life miserable for their teams. They need to more clearly articulate how people can contribute and what is expected of them. Companies need to rethink the “why” behind return-to-office policies, for example, so they don’t just feel like ham-handed directives based on a lack of trust in employee productivity. This issue of quiet quitting is fraught, and I want to be clear that there is a balance of shared responsibility here. Bad bosses give their employees plenty of reasons to throw up their hands and disengage. Companies need to make work more engaging beyond just coming up with lofty purpose statements. But let’s also give a shout-out to the value of a strong work ethic. A lot of companies are making progress and doing their part to try to figure out the new world of work. And so are the #quietworking employees. Green’s story captures a quality I’ve always admired in many people: they own their job, whatever it is.


Cancer Testing Lab Reports 2nd Major Breach Within 6 Months

The narrow time span between CSI's two major health data breaches will potentially raise red flags with regulators, says Greene, a former senior adviser at HHS OCR. "HHS OCR will often look at what actions the entity took in response to the first data breach and whether the multiple breaches were due to a similar systematic failure, such as a failure to conduct an enterprisewide risk analysis," he says. While there are definite negatives involving major breaches being reported within a short time frame, there can also be a sliver of optimism related to the subsequent incident. ... "While multiple breaches may reflect widespread information security issues, I have also seen it occur for more positive reasons, such as an entity improving already-good audit practices and, as a result, detecting more cases of users abusing their access privileges." ... "We believe the access to a single employee mailbox occurred not to access patient information, but rather as part of an effort to commit financial fraud on other entities by redirecting CSI customer health care provider payments to an account posing as CSI using a fictitious email address," CSI says.


Ransomware: This is how half of attacks begin, and this is how you can stop them

While over half of ransomware incidents examined started with attackers exploiting internet-facing vulnerabilities, compromised credentials – usernames and passwords – were the entry point for 39% of incidents. There are several ways that usernames and passwords can be stolen, including phishing attacks or infecting users with information-stealing malware. It's also common for attackers to simply breach weak or common passwords with brute-force attacks. Other methods that cyber criminals have used as the initial entry point for ransomware attacks include malware infections, phishing, drive-by downloads, and exploiting network misconfigurations. No matter which method is used to initiate ransomware campaigns, the report warns that "ransomware remains a major threat and one that feeds on gaps in security control frameworks". Despite the challenges that can be associated with preparing for ransomware and other malicious cyber threats – especially in large enterprise environments – Secureworks researchers suggest that applying security patches is one of the key things organisations can do to help protect their networks.


Landmark US-UK Data Access Agreement Begins

“The Data Access Agreement will allow information and evidence that is held by service providers within each of our nations and relates to the prevention, detection, investigation or prosecution of serious crime to be accessed more quickly than ever before,” noted a joint statement issued by Washington and London. “This will help, for example, our law enforcement agencies gain more effective access to the evidence they need to bring offenders to justice, including terrorists and child abuse offenders, thereby preventing further victimization.” However, legal experts have also warned that any UK service providers responding to requests from US law enforcers would have to consider whether there was a “legal basis” for data transfers under the GDPR. Data flowing the other way would not be subject to the same concerns given the European Commission’s adequacy decision regarding the UK. That said, Cooley predicted that overseas production orders (OPOs) would still come under intense legal scrutiny.


What IT will look like in 2025

To succeed, both now and as the future unfolds, CIOs will need to synthesize a range of technologies cohesively to deliver experiences, functionalities, and services to employees, partners, and most definitely customers. “When you think about 2025, our teams will continue to focus on serving both customers, internal and external, and to find ways to make our business better on a daily basis,” says Richard A. Hook, executive vice president and CIO of Penske Automotive Group and CIO of Penske Corp. “In addition, our teams will continue to evolve their skills to ensure everyone has at least a security baseline of knowledge (deeper depending on roles), increased depth on various cloud platforms and configurations, and the skills necessary to build automation within IT and the business.” ... “We see that leaders increasingly recognize the next phase of new value will come from transformational efforts — seeking to change their business models, finding new forms of digitalized products and services, new ways to reach new customer segments, etc.,” says Gartner’s Tyler.


Understanding Kafka-on-Pulsar (KoP): Yesterday, Today, and Tomorrow

To provide a smoother migration experience for users, the KoP community came up with a new solution. They decided to bring native Kafka protocol support to Pulsar by introducing a Kafka protocol handler on Pulsar brokers. Protocol handlers were a new feature introduced in Pulsar 2.5.0. They allow Pulsar brokers to support other messaging protocols, including Kafka, AMQP, and MQTT. Compared with the above-mentioned migration plans, KoP features the following key benefits:

No Code Change: Users do not need to modify any code in their Kafka applications, including clients written in different languages, the applications themselves, and third-party components.

Great Compatibility: KoP is compatible with the majority of tools in the Kafka ecosystem. It currently supports Kafka 0.9+.

Direct Interaction With Pulsar Brokers: Before KoP was designed, some users tried to make the Pulsar client serve requests sent by the Kafka client by creating a proxy layer in the middle. This might impact performance as it entailed additional routing of requests. By comparison, KoP allows clients to communicate directly with Pulsar brokers without compromising performance.
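The "no code change" claim boils down to configuration: an existing Kafka client keeps all of its code and only points its bootstrap address at a Pulsar broker running the KoP protocol handler. The hostname and port below are illustrative, not a fixed KoP default:

```
# client.properties for an unmodified Kafka client
# Only the address changes: it now targets a Pulsar broker with KoP enabled.
bootstrap.servers=pulsar-broker.example.com:9092
```

Everything else in the client configuration, and the producer/consumer code itself, stays the same.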


How to Adapt to the New World of Work

Burnout is a real and serious issue facing the workforce, with 43% of employees stating they are somewhat or very burnt out. Burnout is a combination of exhaustion, cynicism, and lack of purpose at work. This burden results in employees feeling worn out both physically and mentally, unable to bring their best to work. It often causes employees to take long leaves of absence in an attempt to recover, and it is a key driver of turnover as they seek new roles that they hope will reinvigorate their passion and drive. Sources of burnout might include overwork, lack of necessary support or resources, or unfair treatment. Feedback tools can help find the root causes of burnout and ways to mitigate them. Implementing wellness tools is another way to address this issue and demonstrate that the company prioritizes mental health. Employees whose organizations provide wellness tools are less likely to be extremely burnt out. Currently, only 26% of tech employees say their company provides wellbeing support tools.



Quote for the day:

"Leadership is the wise use of power. Power is the capacity to translate intention into reality and sustain it." -- Warren Bennis

Daily Tech Digest - October 08, 2022

How to manage IT infrastructure in a fast-growing company: the DataRobot experience

With Jamf, we offered a new form of employee communication with IT through the IT Self-Service application. In effect, it is a portal through which company employees can change the status quo in established business processes within the company. Our position: IT Self-Service is an employee’s first IT companion and the first line of IT help. The main idea of this service is to reduce the load on the IT team and the number of open HelpDesk tickets. This means more efficient use of the company’s IT resources. ... Since classical DevOps engineers were at the origin of the company’s IT onboarding process automation, the scenario of preparing computers for onboarding was implemented with the world’s most popular DevOps configuration management system, Ansible. Ansible is written in Python, and its playbooks use the declarative YAML markup language. The approach worked well because it solved the problem of preparing computers for both macOS and Ubuntu platforms with platform-dependent branching in the deployment script.
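A minimal Ansible playbook of the kind described might look like this sketch, with one play branching on the target platform. The package names and host group are invented for illustration, not taken from DataRobot's actual scripts:

```
# onboarding.yml — illustrative laptop-preparation play
- name: Prepare a new employee laptop
  hosts: new_laptops
  tasks:
    - name: Install baseline packages on Ubuntu
      ansible.builtin.apt:
        name: [git, curl]
        state: present
      when: ansible_facts['os_family'] == 'Debian'

    - name: Install baseline packages on macOS
      community.general.homebrew:
        name: [git, curl]
        state: present
      when: ansible_facts['os_family'] == 'Darwin'
```

The `when:` conditions are the platform-dependent branching mentioned above: one playbook serves both macOS and Ubuntu machines.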


How to make your APIs more discoverable

API discoverability is a key aspect of any API management initiative. The discoverability of an API directly impacts its adoption and usage. A typical big enterprise with multiple development teams might build hundreds of APIs that they would want to reuse internally or share with partners that build complementary applications. If the teams are not able to discover existing APIs, they might build a new API with the same functionality. It might lead to a duplication of efforts and underutilization of the existing API. It is also an unscalable practice to contact the API developer each time someone wants to use the API. There needs to be a better and more hands-off way for internal teams and partners to discover and understand the usage of these APIs without directly contacting the developers who built them. API discoverability does not just mean making it easy to find an API by providing an inventory. It should also address some key aspects that are important for an API consumer, such as understanding the API through documentation, request and response format, sign-up options, and the business terms and conditions (in case of a partner) of using the API.


The long-term answer to fixing bias in AI systems

Some of these [long-term fix] recommendations are hard. For instance, one way these systems get biased is they're obviously being run by for-profit organizations. The usual players are Google, Facebook and Amazon. They are banking on their algorithms trying to optimize user engagement, which on the surface seems like a good idea. The problem is, people don't engage with things just because they are good or relevant. More often, they engage with things because the content has certain kinds of emotions, like fear or hatred, or certain kinds of conspiracy. Unfortunately, this focus on engagement is problematic. It's primarily because an average user engages with things that are often not verified, but are entertaining. The algorithms essentially end up learning that, OK, that's a good thing to do. This creates a vicious cycle. A longer-term solution is to start breaking the cycle. That needs to happen from both sides. It needs to happen from these services, the tech companies that are targeting for higher engagement. They need to start changing their formula for how they consider engagement or how they optimize their algorithms for something other than engagement.


Great leaders ask great questions: Here are 3 steps to up your questioning game.

Having a good arsenal of questions at one’s disposal is a must for any leader, but the one staple of any leader is the open-ended question. Asking open-ended questions is like adjusting the lens of a camera, opening the aperture to create a wider field of view. This wider field sets a tone of receptivity, signaling that you are open to new information, in learning mode, and ready for a dialogue not a monologue. ... You may have heard the term active listening. It involves paying close attention to words and nonverbal actions and providing feedback to improve mutual understanding. But have you ever stopped to consider passive listening? Passive listening also involves listening closely to the speaker but without reacting. Instead, passive listening leaves space for silence. By combining both of these modes, we achieve what we call effective listening. ... One of the most powerful response techniques is the ability to ask questions. Questions frame the issue, remove ambiguity, expose gaps, reduce risk, give permission to engage, enable dialogue, uncover opportunities, and help to pressure-test logic.


The 10 Immutable Laws of Testing

The bug count measures what annoys our users the most - Bugs aren’t a measure of quality (that’s measured by things like fitness for purpose, reliable delivery, cost and other stuff). But bugs are what annoy our users most. If you don’t believe me, consider this: over 60% of users delete an app if it freezes, crashes or displays an error message. Cue P!nk. Bugs exist because we write them into our code: Complexity defeats good intentions - We all know where bugs come from: developers writing code (enabled by users who want new functionality). Bugs are the visible evidence that our code is sufficiently complicated that we don’t fully understand it. We don’t like creating bugs, wish we didn’t write them, and have developed some coping skills to address the problem … but we still write bugs into our code. Bugs (like tchotchkes) accumulate over time—every time we add or change functionality, to be precise - Everyone has an Aunt Edna who inevitably comes home from every outing with some new thing to put on a shelf. The inevitable result of creating software is more bugs (and, yes, more/better functionality).


Reliable Continuous Testing Requires Automation

Automation makes it possible to build a reliable continuous testing process that covers the functional and non-functional requirements of the software. Preferably this automation should be done from the beginning of product development to enable quick release and delivery of software and early feedback from the users. ... We see more and more organizations trying to adopt the DevOps mindset and way of working. Velinov stated that software engineers, including the QA engineers, have to care not only about how they develop, test, and deliver their software, but also about how they maintain and improve their live products. They have to think more and more about the end user. Velinov mentioned that a significant requirement is and has always been to deliver software solutions quickly to production, safely, and securely. That’s impacting the continuous testing, as the QAs have to adapt their processes to rely mainly on automation for quick and early feedback, he said.


Seven Principles I Follow To Be a Better Data Scientist

Data science is an ever-changing field, so keeping up with the latest trends and techniques is essential to consistent performance at work. For data scientists who hold a full-time job, it is unrealistic to spend weeks learning something new before applying it to working projects. We need to learn fast, and one way to achieve this is learning by doing. Rather than getting lost in too many details and background information on a new concept, the fastest way to fully grasp it is to follow a trustworthy practical tutorial and replicate it, then make customized changes to achieve better results in your projects. Take the example of learning the Random Forest algorithm. We sure need to know some basics about the algorithm — what it is, where it can be used, etc. Then we just use it in a current project, following some tutorials, and see what the results are. Blog posts with examples are great sources for educating yourself fast, compared with textbooks or online courses. Lastly, we troubleshoot the results and look for ways to improve the application of the algorithm.
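In the spirit of learning by doing, a first Random Forest experiment can be as small as the sketch below. It assumes scikit-learn is installed, and the toy data is invented purely for illustration:

```python
from sklearn.ensemble import RandomForestClassifier

# Toy dataset: one feature, two clearly separated classes.
X = [[0], [1], [2], [10], [11], [12]]
y = [0, 0, 0, 1, 1, 1]

# Fit a small forest; random_state makes the run reproducible.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Predict for unseen points near each cluster.
print(model.predict([[1.5], [11.5]]).tolist())
```

From a runnable baseline like this, the troubleshooting and tuning steps (feature choice, tree count, depth) follow naturally on your own data.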


What Good Security Looks Like in a Cloudy World

When it comes to security issues and fixes, it is extremely important to be able to differentiate between new and old findings because this will also eventually affect the next two pillars: prioritization and remediation. One of the things DevSecOps tools have made possible is a real-time understanding of what’s happening in our code, with processes aligned with developer workflows, such as fixes at commonly accepted gates, like pull requests, and even earlier with precommit hooks or in-IDE alerts. A similar approach to the way we prevent issues from being merged into our code base through common CI gating can be applied to runtime-related tools during the CD phase. In this way, you can prevent runtime-related issues from reaching production, as well. So if we are able to discover security flaws while we’re still coding or in predeployment to production systems, these can be handled now and within the developer or operational context and need never go into the backlog. This is a very important distinction between our categories of security issues.
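One concrete shape of the precommit gate mentioned above is a secret-scanning hook that runs before every commit. This sketch assumes the widely used gitleaks pre-commit hook; the `rev` shown is illustrative, and you would pin whatever release you have vetted:

```
# .pre-commit-config.yaml — scan staged changes for leaked secrets
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks
```

The same check can be repeated as a CI gate on pull requests, so a finding is handled in the developer's context rather than landing in the backlog.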


Avoiding the Top Mistakes Made by Tech Startups

Scaling too quickly increases a startup's burn rate, reducing the time it has to demonstrate key metrics for its next funding round and other milestone events, Yépez explains. Such a startup can also trash trusted customer relationships by failing to deliver goods or services as promised. “That burned cash won’t come back, and neither will that customer,” he cautions. Conversely, limited funding forces some struggling businesses to assign staff members tasks that fall outside of their skillsets. “These responsibilities often suffer from poor execution and may have severe consequences for the startup,” says Thomas Dolan, co-founder of 28Stone Consulting, an IT and fintech consulting firm. Many startups also neglect to protect their intellectual property. In their rush to go to market, some founders unwittingly disclose or offer their core technology to potential investors and other external parties. Such activity triggers deadlines for filing patent applications, says Kyle Graves, an attorney at law firm Snell & Wilmer.


Becoming “cloud smart” — the path to accelerated digital innovation

“Cloud chaos” comes from a landscape of unknowns. What is our enterprise cloud architecture? How do public and private clouds co-exist? What about edge computing? How do we align legal and compliance requirements in the multi-cloud world for heavily regulated industries such as fintech? Those daunting tasks and risks reflect the multi-cloud complexity and chaos we constantly live in. Having worked with many organisations transitioning away from “cloud chaos”, I see similar challenges regardless of the size of the business. It takes a vast amount of effort to architect and manage multi-cloud platforms. Think about scalability, interoperability, consistency, and a unified user experience. Think about the skill sets and knowledge required to build and operate cloud-native apps. Also, think about automating and optimising cloud management and architecting cloud and edge infrastructure. Think about connecting and securing apps and clouds. And finally, think about app security, legal, and compliance among other areas. These challenges keep CIOs up at night.



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.

Daily Tech Digest - October 06, 2022

Addressing the Complexities of Cybersecurity at Fintech Enterprises

Effective IT governance is the cornerstone of cybersecurity as it is about leadership: whether leaders treat IT as a cost center or as an enterprisewide strategic asset. Governance is made more complex for central banks and regulatory and supervisory authorities due to regulation, supervision and compliance. There are many global models, frameworks and standards that can be referenced for complete cybersecurity governance and management, but ultimately, a mature organization chooses its own preferred guidance. The US National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF), the US Federal Financial Institutions Examination Council (FFIEC) Cybersecurity Assessment Tool, the International Organization for Standardization (ISO) 27000 series of standards and COBIT® are valuable resources for effective IT governance. These frameworks clearly describe the roles and responsibilities of top management, the importance of IT strategic alignment in achieving enterprise objectives, the importance of leadership and top management support in addressing IT and cybersecurity issues, the importance of effective IT risk management, and proper reporting strategies.


CIO Guy Hadari on the management skills that set IT leaders apart

As Hadari sees it, “The challenge is that most up-and-coming IT professionals are trained to be technology implementers and innovators, and so are ill equipped for the management aspects of the job,” something that he experienced personally. In his first few years as CIO, Hadari’s comfort zone was data, analytics, and statistics, and that was the lens he used to lead IT. ... Hadari encourages his team to use data, surveys, and conversations to understand the perceptions of IT, and the problems that create those perceptions. He finds that comparing how IT rates itself to how the business rates IT reveals a great deal about where IT needs to focus. “Collecting all of that information is not an easy process, but it is the beginning of change,” says Hadari. “It means that we can accept our challenges, bring them out into the open, and do something about them.” At Biogen, Hadari’s extended leadership team, which is one level below his senior IT leadership team, owns the strategy and plan for IT improvement. “They build it, execute on it, and own it,” he says.


Different employee segments will require different messaging. The IT group will benefit from different messaging than the sales group. Don’t make the mistake, though, of believing IT employees don’t need security awareness—they do. Security teams should take steps to understand employees’ current comprehension of security messaging and where gaps may exist. And, of course, security awareness marketers need to understand the social and behavioral drivers of employee actions. What’s important to them? What motivates them? What are they concerned about? You can then create messaging to address employees’ pain points or motivators—to give them some reason to act, or not act, based on what they hear and learn. ... Security is a journey and a conversation, not a destination and a directive. Thinking like a marketer and taking steps to segment, understand and effectively connect with employees based on their needs, interests and concerns can help to better engage the organization in its cybersecurity efforts.


Young people in tech unhappy despite inclusion push

Almost half of younger people in the tech sector have at some point felt uncomfortable at work because of their gender, ethnicity, background or neurodivergence. Young people not already in the sector claimed they’re not confident about how to make tech their career, with a number of misconceptions about what is involved in a tech career still acting as a deterrent. Almost 15% of the young people asked who were not already in the sector said they know nothing about tech careers, with 29% believing they don’t have the right qualifications for a job in the sector. Women have more doubts about the sector than men – 23% of women believe their maths and science skills aren’t up to scratch for a tech job, compared with 13% of men; and 19% of women doubt they’re smart enough for the sector, compared with 13% of men. ... Only 5% of young people said that a lack of ethnic diversity is a deterrent to pursuing a tech career, although this varies based on the ethnicity of the person asked, with the breakdown being: 9% of young people from mixed-race backgrounds, 10% of people from an Asian background, and almost 36% of people from a black background.


4 Reasons Why Talent Development Is So Important To Your Business

In the age of employee turnover and the Great Resignation, organizations in nearly every field are finding it more difficult than ever to attract and retain top talent. As a leader, you need to make talent development a personal priority to stay competitive in recruiting and keeping the best people. Have a solid plan and communicate it widely to both prospective recruits and current employees. A truly thoughtful talent development program lets people know how much you value them. It strengthens talent in new directions. Employees want to know that their leader sees their potential, and it’s important to be intentional about recognizing and reinforcing the strengths of your people. A one-size-fits-all approach to talent development isn’t good enough—you need to design a program for each individual based on their strengths, their goals and the organization’s needs. When you strengthen your talent, you strengthen your leadership. It improves productivity. According to a recent Gallup study, helping your employees make full use of their skills and strengths, and providing them with opportunities for growth and improvement, can make them up to six times more productive.


8 ways to get out of a career rut

Consider the millennial who felt stuck at a small company with no room for growth. Or the older generation of workers who thought they should retire early because the future was so uncertain and accepting a complete shift to digital felt daunting. For Gen Z, the prospect of never meeting managers or colleagues – because of virtual interviews and remote jobs – was foreign and left some without a sense of belonging. Not only were we physically absent from workspaces, but many of us also struggled mentally with the sudden, enormous changes to our daily routines and goals. It became a time of contemplation, where many professionals began reassessing their careers (and lives). And the realization for many? They felt stuck. What are your options if you want to take a big leap out of your current situation? How do you find motivation, especially after a couple of very stressful years outside of your control? What inspires you to take on a new challenge?


The Dark Side of Open Source

A lack of interest, patience, or time, a change of profession, and creative differences are some of the issues that push developers to close an open-source project. But the biggest reason developers quit is that they run out of energy. People like John Resig, creator of jQuery, and Ryan Dahl, creator of Node.js, most likely exited their respective OSS projects because they couldn’t keep up with the energy those projects demanded. Faker.js creator Marak Squires’ sentiment was understandable: it’s very difficult to do unpaid work for a long period of time, and at a certain point an open-source project can become more hassle than it’s worth. It also depends, of course, on your motivations for developing open-source software, but more on that later. The best open-source projects are typically those maintained by developers who are compensated for their work, can maintain a work-life balance, and can devote their full attention to enhancing them.


Back to Basics: Cybersecurity's Weakest Link

Social engineering was a driver for hacking over 20 years ago and, apparently, we still haven't moved away from it. Adding insult to injury, successful social engineering isn't restricted to non-technical organizations. It's very plausible that an unsavvy user in a backwater government department might fall for social engineering, for example, but much less so someone working at a leading tech firm – and we see that both Uber and Rockstar Games were impacted by social engineering. At some point, as a cybersecurity practitioner with the responsibility of educating your users and making them aware of the risks that they (and by extension the organization) are exposed to, you'd think that your colleagues would stop falling for what is literally the oldest trick in the hacking playbook. It's conceivable that users are not paying attention during training or are simply too busy with other things to remember what someone told them about what they can click on or not. However, social engineering attacks have so consistently been in the public news – not just cybersecurity news – that the excuse "I didn't know I shouldn't click email links" is getting harder and harder to accept.


Cyber insurance explained: What it covers and why prices continue to rise

For technology and compliance lawyer Jonathan Armstrong, the most significant driver of change in cyber insurance is demand for financial protection from litigation against organizations in the wake of cyber incidents. “We have seen that an attack or breach can be followed in the next day or so by lawyers claiming that they are investigating litigation against the company that has been hit.” This issue has been under the spotlight recently in the Lloyd v Google case in the UK. Richard Lloyd alleged that Google collected data from around 4 million iPhone users between 2011 and 2012 regarding their browsing habits without their knowledge or consent for commercial purposes, such as targeted advertising. He looked to bring representative action on behalf of all affected individuals against Google for compensation, which Google opposed. The UK Supreme Court sought to establish whether such a claim for a breach of data protection legislation can succeed without distinctive personal damage and if claimants can bring group action on behalf of unidentified individuals, including people who may not even be aware that they were affected.


Achieving faster time-to-market with data management

When companies manage their product data efficiently, they can be flexible while launching their new products. With error-free product data for new items, brands can customise information for each marketplace and promotion period. A PIM (product information management) system maintains high-quality product information that is scalable, and offers complete freedom to be deployed across any technology environment. Product data can be easily imported from various vendors in multiple file formats and mapped to a single point of truth. ... In the wake of technological advances, fluctuating consumer expectations, competitive pressures, and turbulent market dynamics, operational agility is vital to survive and succeed. Faster time-to-market is one of the parameters that determines business agility. To continuously deliver high-quality, novel, and faster services, companies need to deploy PIM, which enhances product information, and improves conversion rates and customer retention. Businesses can also make data-driven decisions and create joyous customer journeys with the available data.
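The "multiple vendor formats mapped to a single point of truth" step can be sketched as a simple field-mapping pass; the vendor names and field names below are invented for illustration.

```python
# Each vendor ships the same product data under different field names;
# a per-vendor map translates them onto one canonical record shape.
FIELD_MAPS = {
    "vendor_a": {"sku": "sku", "title": "name", "price_usd": "price"},
    "vendor_b": {"item_no": "sku", "desc": "name", "cost": "price"},
}

def to_canonical(vendor, record):
    mapping = FIELD_MAPS[vendor]
    return {canonical: record[src] for src, canonical in mapping.items()}

a = to_canonical("vendor_a", {"sku": "X1", "title": "Mug", "price_usd": 7.5})
b = to_canonical("vendor_b", {"item_no": "X1", "desc": "Mug", "cost": 7.5})
print(a == b)  # both vendors land on the same canonical shape
```

A production PIM does far more (validation, enrichment, localisation), but the single-point-of-truth idea is exactly this normalisation step applied at scale.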



Quote for the day:

"If you don't demonstrate leadership character, your skills and your results will be discounted, if not dismissed." -- Mark Miller

Daily Tech Digest - October 05, 2022

How edge computing will support the metaverse

Edge computing supports the metaverse by minimizing network latency, reducing bandwidth demands and storing significant data locally. Edge computing, in this context, means compute and storage power placed closer to a metaverse participant, rather than in a conventional cloud data center. Latency increases with distance—at least for current computing and networking technologies. Quantum entanglement experiments show correlations at a distance without delay, but they cannot be used to transmit information, and those aren’t systems we can scale or use for standard purposes—yet. In a virtual world, you experience latency as lag: A character might appear to hesitate a bit as it moves. Inconsistent latency produces movement that might appear jerky or communication that varies in speed. Lower latency, in general, means smoother movement. Edge computing can also help reduce bandwidth, since calculations get handled by either an on-site system or one nearby, rather than a remote location. Much as a graphics card works in tandem with a CPU to handle calculations and render images with less stress on the CPU, an edge computing architecture moves calculations closer to the metaverse participant.
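The latency-distance relationship has a hard physical floor that is easy to estimate. The sketch below assumes signals travel through optical fibre at roughly two-thirds the speed of light; the example distances are illustrative.

```python
# Light in optical fibre covers roughly 200 km per millisecond
# (about two-thirds of c), so distance alone sets a latency floor.
SPEED_IN_FIBRE_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    """Best-case round-trip time from propagation delay alone."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS

for label, km in [("edge node, 50 km", 50),
                  ("regional cloud, 1500 km", 1500),
                  ("cross-continental, 9000 km", 9000)]:
    print(f"{label}: {round_trip_ms(km):.1f} ms floor")
```

Real latency adds routing, queuing and processing delay on top of this floor, which is why moving compute from a cross-continental data center to a nearby edge node changes what a metaverse application can feel like.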


Big Gains In Tech Slowed By Talent Gaps And High Costs, Executive Survey Finds

Survey participant Andrew Whytock, head of digitalization in Siemens’ pharmaceutical division, crystallized the criticality of employee recruitment, training and retention, explaining, “It’s great having a big tech strategy, but employers are struggling to find the people to execute their plans.” In addition to growth needs, staffing problems extend to fortifying cybersecurity. Nearly 60% of respondents reported that cybersecurity objectives are behind schedule. When asked to identify the “internal challenges” driving delays, executives ranked “lack of key skills” and “cultural obstacles” highest. That’s inexcusable. Lax tech controls and strategy acceleration pressure make a dangerous mix. To thrive, “digitally mature” enterprises need top talent in supportive cultures to unlock the transformative value of their sizable IT modernization investments. ... Despite huge investments in job training and leadership development, broad business perspective remains a widespread skill gap.


How to design a data architecture for business success

“Data architecture is many things to many people and it is easy to drown in an ocean of ideas, processes and initiatives,” says Tim Garrood, a data architecture expert at PA Consulting. Firms need to ensure that data architecture projects deliver value to the business, he adds, and this needs knowledge and skills, as well as technology. However, part of the challenge for CIOs and CDOs is that technology is driving complexity in both data management and how it is used. As management consultancy McKinsey put it in a 2020 paper: “Technical additions – from data lakes to customer analytics platforms to stream processing – have increased the complexity of data architectures enormously.” This is making it harder for firms to manage their existing data and to deliver new capabilities. The move away from traditional relational database systems to much more flexible data structures – and the ability to capture and process unstructured data – gives organisations the potential to do far more with data than ever before. The challenge for CIOs and CDOs is to tie that opportunity back to the needs of the business.


What Is Cloud Orchestration?

Cloud orchestration is the coordination and automation of workloads, resources, and infrastructure in public and private cloud environments: the automation of the whole cloud system. Each part should work together to produce an efficient system. Cloud automation is a subset of cloud orchestration focused on automating the individual components of a cloud system. Cloud orchestration and automation complement each other to produce an automated cloud system. ... Cloud orchestration supports the DevOps framework by allowing continuous integration, monitoring, and testing. Cloud orchestration solutions manage all services so that you get more frequent updates and can troubleshoot faster. Your applications are also more secure as you can patch vulnerabilities quickly. The journey towards full cloud orchestration is hard to complete. To make the transition more manageable, you can find benefits along the way with cloud automation. For example, you might automate the database component to speed up manual data handling or install a smart scheduler for your Kubernetes workloads.
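The relationship described above, where automation handles individual components and orchestration sequences them into a system, can be sketched in a few lines; the task names and dependency graph are invented for illustration.

```python
# Each task automates one component; the orchestrator runs them in an
# order that respects their declared dependencies (no cycle handling,
# deliberately minimal).
def orchestrate(tasks, deps):
    """Run tasks so every dependency runs before its dependents."""
    done, order = set(), []
    def run(name):
        if name in done:
            return
        for dep in deps.get(name, []):
            run(dep)
        tasks[name]()          # the "automation" piece: one component
        done.add(name)
        order.append(name)
    for name in tasks:
        run(name)
    return order

log = []
tasks = {
    "network": lambda: log.append("provision network"),
    "database": lambda: log.append("provision database"),
    "app": lambda: log.append("deploy app"),
}
deps = {"database": ["network"], "app": ["network", "database"]}
print(orchestrate(tasks, deps))
```

Real orchestrators (Terraform, Kubernetes operators, workflow engines) add state tracking, retries and rollback, but the core job is this dependency-ordered coordination of automated steps.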


Introducing post-quantum Cloudflare Tunnel

From tech giants to small businesses: we will all have to make sure our hardware and software is updated so that our data is protected against the arrival of quantum computers. It seems far away, but it’s not a problem for later: any encrypted data captured today can be broken by a sufficiently powerful quantum computer in the future. ... How does it work? cloudflared creates long-running connections to two nearby Cloudflare data centers, for instance San Francisco and one other. When your employee visits your domain, they connect to a Cloudflare server close to them, say in Frankfurt. That server knows that this is a Cloudflare Tunnel and that your cloudflared has a connection to a server in San Francisco, and thus it relays the request to it. In turn, via the reverse connection, the request ends up at cloudflared, which passes it to the webapp via your internal network. In essence, Cloudflare Tunnel is a simple but convenient tool, but the magic is in what you can do on top with it: you get Cloudflare’s DDoS protection for free; fine-grained access control with Cloudflare Access and request logs just to name a few.
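The relay path described above can be sketched schematically. This is a toy model of the flow (outbound long-lived connections from a connector, an edge server forwarding to a relay point), not Cloudflare's actual implementation; all class and variable names are invented.

```python
class Connector:
    """Stands in for cloudflared running inside the private network."""
    def __init__(self, origin):
        self.origin = origin          # function standing in for the webapp
    def handle(self, request):
        return self.origin(request)   # hand off over the internal network

class RelayPoint:
    """A data center the connector has dialled out to."""
    def __init__(self):
        self.connector = None
    def register(self, connector):    # the outbound, long-lived connection
        self.connector = connector
    def relay(self, request):
        return self.connector.handle(request)

class EdgeServer:
    """The server nearest the visitor; it knows where the tunnel ends."""
    def __init__(self, relay_points):
        self.relay_points = relay_points
    def serve(self, request):
        for point in self.relay_points:
            if point.connector is not None:
                return point.relay(request)
        raise ConnectionError("tunnel down")

webapp = lambda req: f"200 OK for {req}"
sfo, backup = RelayPoint(), RelayPoint()
sfo.register(Connector(webapp))       # cloudflared's outbound connection
frankfurt = EdgeServer([sfo, backup])
print(frankfurt.serve("/login"))
```

The key property the sketch shows is that no inbound connection ever reaches the private network: the origin only ever dials out, and requests ride back along that existing connection.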


What are the benefits of a microservices architecture?

The benefit of a microservice architecture is that developers can deploy features that prevent cascading failures. A variety of tools are also available, from GitLab and others, to build fault-tolerant microservices that help improve the resilience of the infrastructure. A microservice application can be programmed in any language, so dev teams can choose the best language for the job. The fact that microservices architectures are language agnostic also allows the developers to use their existing skill sets to maximum advantage – no need to learn a new programming language just to get the work done. Using cloud-based microservices gives developers another advantage, as they can access an application from any internet-connected device, regardless of its platform. A microservices architecture lets teams deploy independent applications without affecting other services in the architecture. This feature will enable developers to add new modules without redesigning the system's complete structure. Businesses can efficiently add new features as needed under a microservices architecture.
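One common feature for preventing cascading failures is a circuit breaker: after repeated downstream errors, callers fail fast instead of piling up waiting on a sick service. A minimal sketch follows, with invented parameter names and no claim to match any particular library's API.

```python
import time

class CircuitBreaker:
    """After `max_failures` consecutive errors, short-circuit calls for
    `reset_after` seconds so a failing downstream service cannot drag
    its callers down with it."""
    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None   # half-open: let one probe through
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()
            raise
        self.failures = 0
        return result
```

Wrapping each inter-service call in a breaker like this is one way a single slow or failing microservice stays an isolated incident rather than a system-wide outage.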


Tips for effective data preparation

According to TechRepublic, data preparation is “the process of cleaning, transforming and restructuring data so that users can use it for analysis, business intelligence and visualization.” AWS’s definition is even simpler: “Data preparation is the process of preparing raw data so that it is suitable for further processing and analysis.” But what does this actually mean in practice? Data doesn’t typically reach enterprises in a standardized format and, thus, needs to be prepared for enterprise use. Some of the data is structured—like customer names, addresses and product preferences — while most is almost certainly unstructured—like geospatial data, product reviews, mobile activity and tweets. Before data scientists can run machine learning models to tease out insights, they’re first going to need to transform the data, reformatting it or perhaps correcting it, so it’s in a consistent format that serves their needs. ... In addition, data preparation can help to reduce data management costs that balloon when you try to apply bad data to otherwise good ML models. Now, given the importance of getting data preparation right, what are some tips for doing it well?
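A toy version of that transform-and-correct pass, in plain Python rather than a real data-prep tool; the field names and rules are invented for illustration.

```python
# Trim whitespace, normalise casing, drop unusable rows, and coerce a
# numeric field, flagging values that cannot be parsed.
def prepare(records):
    cleaned = []
    for rec in records:
        name = (rec.get("name") or "").strip().title()
        if not name:
            continue  # drop rows with no usable key
        raw_price = str(rec.get("price", "")).replace("$", "").strip()
        try:
            price = float(raw_price)
        except ValueError:
            price = None  # keep the row, flag the bad value
        cleaned.append({"name": name, "price": price})
    return cleaned

raw = [
    {"name": "  alice smith ", "price": "$19.99"},
    {"name": "", "price": "5"},
    {"name": "bob", "price": "n/a"},
]
print(prepare(raw))
```

The same shape of pipeline scales up in pandas or a dedicated prep tool; what matters is that every downstream model sees one consistent format.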


Optimizing Isolation Levels for Scaling Distributed Databases

The SnapshotRead isolation level, although not an ANSI standard, has been gaining popularity. It is commonly implemented with multiversion concurrency control (MVCC). The advantage of this isolation level is that it is contention-free: it creates a snapshot at the beginning of the transaction. All reads are sent to that snapshot without obtaining any locks, while writes still follow the rules of strict serializability. A SnapshotRead transaction is most valuable for a read-only workload because you can see a consistent database snapshot. This avoids surprises while loading different pieces of data that depend on each other transactionally. You can also use the snapshot feature to read multiple tables at a particular time and then later observe the changes that have occurred since that snapshot. This functionality is convenient for Change Data Capture tools that want to stream changes to an analytics database. For transactions that perform writes, the snapshot feature is not that useful. You mainly want to control whether to allow a value to change after the last read. If you want to allow the value to change, it will be stale as soon as you read it because someone else can update it later.
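The snapshot behaviour described above can be sketched with a toy multi-version store: writes append new versions, and a snapshot read pins the version counter at creation time. The names and structure are illustrative, not any real database's design.

```python
import itertools

class MVCCStore:
    """Toy multi-version store: each write records (version, value); a
    snapshot read pins a version and never sees later writes."""
    def __init__(self):
        self._versions = {}          # key -> list of (version, value)
        self._clock = itertools.count(1)
        self._current = 0

    def write(self, key, value):
        self._current = next(self._clock)
        self._versions.setdefault(key, []).append((self._current, value))

    def snapshot(self):
        at = self._current           # pin the version at creation time
        def read(key):
            # newest version at or before the snapshot point, no locks
            for version, value in reversed(self._versions.get(key, [])):
                if version <= at:
                    return value
            return None
        return read

store = MVCCStore()
store.write("balance", 100)
snap = store.snapshot()
store.write("balance", 50)      # later write, invisible to the snapshot
print(snap("balance"), store.snapshot()("balance"))
```

Because reads only walk immutable version history, they never block writers, which is the contention-free property the excerpt describes.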


Why IT leaders should embrace a data-driven culture

Data tells the story of what works – and perhaps more importantly, what doesn’t work – for your team. It provides a clear and unbiased picture of how new transformations are netting out and where opportunities lie to increase efficiency and value. Utilizing the right metrics reveals which innovations are most effective for the team, letting IT managers know how transformations are running. Focusing on these results helps organizations streamline business processes and leads to higher team productivity. It also puts IT leaders on the path to sunset legacy solutions that require large budgets or lots of manual work to keep them functional. These changes impact all business areas, allowing employees anywhere and everywhere – not just those in IT – to be more innovative and effective. ... As business leaders focus on meeting the needs of today’s evolving workforce and customers’ desires, operating with a data-driven strategy lets managers stay agile and confident in their next steps. Allowing data to drive decisions also provides a means to back those decisions with clear evidence.


Who is responsible for cyber security in the enterprise?

Alarmingly — or perhaps unfairly — only 8 per cent of executives said that their CISO or equivalent performs above average in communicating the financial, workforce, reputational or personal consequences of cyber threats. At the same time, under 15 per cent of executives gave their CISOs or equivalent a top rating from a scale of one to ten. Maintaining a bridge between business and tech is vital when it comes to ensuring all are on the same page regarding security. “It is no surprise that one of the main challenges companies face when implementing a cyber risk mitigation or resiliency plan is the communication gap between the board and the CISO,” said Anthony Dagostino, founder and CEO of cyber insurance and risk management provider Converge. “Cyber resiliency starts with the board because they understand risk and can help their organisations set the appropriate strategy to effectively mitigate that risk. However, while CISOs are security specialists, most of them still struggle with adequately translating security threats into operational and financial impact to their organisations – which is what boards want to understand.



Quote for the day:

"You may be good. You may even be better than everyone esle. But without a coach you will never be as good as you could be." -- Andy Stanley