Daily Tech Digest - January 06, 2023

2023 could be the year of public cloud repatriation

High cloud bills are rarely the fault of the cloud providers. They are often self-inflicted by enterprises that don’t refactor applications and data to optimize cost efficiency on the new cloud platforms. Yes, the applications work as well as they did on the original platform, but you’ll pay for the inefficiencies you chose not to deal with during the migration. The cloud bills are higher than expected because lifted-and-shifted applications can’t take advantage of native capabilities such as auto-scaling, security, and storage management that allow workloads to function efficiently. It’s easy to point out the folly of not refactoring data and applications for cloud platforms during migration. The reality is that refactoring is time-consuming and expensive, and the pandemic put many enterprises under tight deadlines to migrate to the cloud. For enterprises that did not optimize systems for migration, it doesn’t make much economic sense to refactor those workloads now. Repatriation is often a more cost-effective option for these enterprises, even considering the hassle and expense of operating your own systems in your own data center.


How Cyber Pathways Can Help Your Career

The Cyber Pathways Framework will introduce chartered standards that align with 16 cybersecurity specialties. Job roles that have, until now, been loosely defined will be given specific descriptions and linked to existing qualifications and certifications, establishing minimum requirements for the first time, as called for by the DCMS in its Understanding the cyber security recruitment pool report. On the plus side, this will set a bar for achieving certain roles, helping to standardize role requirements. This could prove helpful in the current climate, where the demand for talent is leading to job creep, with job descriptions containing myriad skillsets. And it could prove fundamental in stopping the job hopping that we’re currently seeing. But it will also make roles more rigid, and given that the sector has always grown organically, provision will need to be made for the evolution of new roles as and when needed, such as ones involving AI and DevSecOps. Another key pathway proposal is the creation of a register for cybersecurity practitioners in senior positions, similar to those seen in the medical, legal and accountancy professions.
 

The Cloud Computing Boom: A Test for Database Companies

The threat of the cloud also resonated with Shoolman, who said that “the cloud is all over”. For instance, a lot of Oracle’s business has been taken by cloud service providers. ... More importantly, he says, there is a change in the top five: a market once dominated by the likes of IBM, SAP, and Oracle is now dominated by AWS, Azure, GCP, etc. The remaining 20%, he says, is occupied by ISVs like Redis, MongoDB, and others. All of the big-tech cloud providers have come up with data services similar to those provided by these independent vendors to cater to their business needs. But ISVs, despite competing with the cloud providers, also have to depend on cloud platforms to host their services; recognising this, MongoDB said, “We are in a love-hate relationship with our cloud partners”. When Shoolman was asked whether he sees more hyperscalers like AWS, Azure, and GCP emerging in future to meet organisation demands for massive scaling in computing, he said that we are indeed seeing a trend of building more and more data centres, since organisations intend to bring data much closer to the user.


Ukraine War and Upcoming SEC Rules Push Boards to Sharpen Cyber Oversight

The war, along with the hybrid work models that have been put in place at many companies as a result of the pandemic, prompted corporate directors to carefully consider how their companies might be exposed to cyber risks, said Andrea Bonime-Blanc, chief executive of GEC Risk Advisory LLC, a New York-based firm that advises boards and executives about cybersecurity and risk management. Board awareness of cybersecurity “was already increasing glacially, but I think the Ukraine war has sharpened the minds,” Ms. Bonime-Blanc said. Some boards now rate cyber threats on a par with trade wars and supply-chain problems among risks that could have major impact on companies, said Michael Hilb ... A communication gap between boards and security chiefs means neither side is as effective as needed to govern cybersecurity, said Yael Nagler, chief executive of Yass Partners, a consulting firm focused on aligning security leadership. Directors sometimes fail to understand core threats, Ms. Nagler said. “They’re not shy people but when it comes to cyber, they feel like they’re asking dumb questions,” she said.


Pro Coders Key to Stopping Citizen Developer Security Breach

Another Forrester prediction that may directly impact developers is the forecast that enterprise business leaders, not IT, will direct more than 40% of API strategies. That goes against the conventional wisdom that IT drives the API strategy, Gardner noted. “APIs have transcended from being just pure application or infrastructure APIs. There [are] now business APIs, there [are] ones that take advantage of data and take advantage of transactions, and essentially, enable the data economy,” he said. “It’s not an IT conversation anymore. IT will make sure it stays secured and locked down and make sure that it’s tightly woven with everything else, but the business leader decides which ones are the most beneficial.” API strategy is even becoming a board-level topic, as board members and C-level leaders have grasped that APIs can be a central part of the business strategy, he said. That makes sense, because the greatest value of APIs comes when organizations use them to create new products, business models, and channels.


Today’s Software Developers Will Stop Coding Soon

Coding will never account for 100% of your time. Even junior ICs will have meetings to attend and non-coding tasks to complete. As you progress through the IC ranks to senior and beyond, the non-coding work will grow. Apart from attending meetings, I’ve highlighted below a few prominent responsibilities that fall under this work. ... Begin doing this at any time, and never stop. No one is “not experienced enough” — even new hires can begin by filling in any conceptual gaps or “gotchas” they encountered while onboarding. ... Software engineering teams will always face skills gaps. You will lack some cross-functional support, like a project or program manager, product designer, etc. As such, you may need to develop the skills of an adjacent role, like project management, to set deadlines and perform other duties associated with overseeing a project’s completion. ... As you gain experience on your team, you’ll be asked to onboard new hires, including mentoring junior devs. Being a mentor will improve your communication skills and bolster your promotion packet.


Can the world’s de facto tech regulator really rein in AI?

The AI Act may also run into enforcement challenges. The regulation will apply mainly to companies or other entities developing and designing AI systems — not to public authorities or other institutions that use them. For example, a facial recognition system could have vastly different implications depending on whether it’s used in a consumer context (i.e., to recognize your face on Instagram) or at a border crossing to scan people’s faces as they enter a country. “We are arguing that a lot of the potential risks or adverse impacts of AI systems depend on the context of use,” said Karolina Iwanska, a digital civic space advisor at the European Center for Not-for-Profit Law in the Hague. “That level of risk seems different in both of these circumstances, but the AI Act primarily targets the developers of AI systems and doesn’t pay enough attention to how the systems are actually going to be used,” she told me. Although there has been plenty of discussion of how the draft regulation will — or will not — protect people’s rights, this is only part of the picture. 


Why Do Ransomware Victims Pay for Data Deletion Guarantees?

Many ransomware-wielding attackers are expert at preying on their victims' compulsion to clean up the mess. Hence victims often face a menu of options: Pay a ransom for a decryptor, and you'll be able to unlock forcibly encrypted data. Pay more, and your name gets deleted from the list of victims on a ransomware group's data-leak site. Pay even more, and you get a promise that whatever data they've stolen - or already leaked - will be immediately deleted. Of course, many victims will feel the impulse to do something, anything, for the illusion that they can belatedly protect stolen data and salvage their reputation. That impulse is understandable. But acting on it is not only too late; it is also being used against victims by extortionists. Psychologically speaking, criminals don't hesitate to find the levers that will compel a victim to act - as in, give them money. Most ransomware groups' promises are bunk, above all any guarantee that a victim cannot verify. Unfortunately, seeing victims pay for data-deletion promises isn't new.


4 career paths for software developers on the move

One of the paths is architecture. “These roles are highly technical and are focused on designing, building, and integrating the foundational components of applications or systems,” Blackwell says. “This would include roles like technical/application architect, solution architect, or enterprise architect.” ... The move into devops is another common path for software developers. These positions are also highly technical, says Blackwell, and are focused on optimizing the tools, processes, and systems to build, test, release, and manage high-quality software in complex or high-availability environments. Devops roles include release manager, engineer, and architect. ... A third path is leadership. “Roles in this area require both good people skills and good technical skills,” Blackwell says. “And each, in their own way, is responsible for ensuring that teams have what they need to succeed, whether technical, process, tools, or skills.” Roles on the leadership path include scrum master, technical project manager, product manager, technical lead, and development manager.


A Skeptic’s Guide to Software Architecture Decisions

The power of positive thinking is real, and yet, when taken too far, it can result in an inability, or even unwillingness, to see outcomes that don’t conform to rosy expectations. More than once, we’ve encountered managers who, when confronted with data that showed a favorite feature of an important stakeholder had no value to customers, ordered the development team to suppress that information so the stakeholder would not "look bad." Even when executives are not telling teams to do things the team suspects are wrong, pressure to show positive results can discourage teams from asking questions that might indicate they are not on the right track. Or the teams may experience confirmation bias and ignore data that does not reinforce the decision they "know" to be correct, dismissing data as "noise." Such might be the case when a team is trying to understand why a much-requested feature has not been embraced by customers, even weeks after its release. The team may believe the feature is hard to find, perhaps requiring a UI redesign. 



Quote for the day:

"Effective team leaders realize they neither know all the answers, nor can they succeed without the other members of the team." -- Katzenbach & Smith

Daily Tech Digest - January 05, 2023

Singh calls herself a business information security partner, but the title most commonly employed for this role is business information security officer (BISO). People in these roles are responsible for one or more areas of the business and they usually report to the CISO or CTO, based on job descriptions found online and those laid out by multiple sources interviewed for this article. The people holding these roles also come from diverse educational and experiential backgrounds, at the core of which are strong familiarity with compliance regulations, solid cybersecurity foundations, and business acumen. ... Renee Guttmann, who’s been CISO to several Fortune 50 companies, says that the most important thing she looks for in a BISO is a thorough understanding of the business unit they support, which includes identifying the company’s “crown jewels”: what the most important assets are, where they are, and the targeted attacks to which they are potentially vulnerable. The BISO should be able to identify the risks and work with others, such as architecture and infrastructure managers, to prioritize risks.


DevOps: 3 steps to plan and execute a successful project

The preliminary stages of a DevOps project are crucial. Without clear direction and a shared understanding across the team, the initiative is doomed to failure. The team and the client must therefore be willing to dedicate the time necessary to understand each other’s goals and ensure their visions align. This can be done through meetings and workshops, where participants identify objectives and team members establish a clear goal for how the final product should look. When executed correctly, the DevOps team will exit the project’s first phase with a well-defined brief and a clear understanding of the client’s goals. If this step is rushed, engineers will be hindered by a lack of direction, increasing the likelihood that the finished product will not meet the client’s requirements. ... Phase two is when the development of the app begins. This is usually facilitated by using a cloud-based solution, where the team begins preparing the environment’s aesthetic, working out the components it should contain, and understanding how they should be configured to maximize efficiency.


5 Key Kubernetes Trends to Follow in 2023

Multi-cluster Kubernetes is important because it makes it feasible to separate workloads using not just namespaces, but entirely distinct clusters. Doing so provides more security and performance protections than you can get using namespaces, which offer only virtualized segmentation between workloads. As a result, multi-cluster Kubernetes makes Kubernetes more valuable for use cases that involve very stringent security requirements, or where it's critical to avoid the "noisy neighbor" problems that can happen when multiple workloads share the same hosting infrastructure. ... No one has ever accused Kubernetes of being an easy platform to use. On the contrary, you'll find plenty of warnings on the internet that Kubernetes is "hard," or even "damn complicated." But you could have said the same thing about Linux in the 1990s, or the Amazon cloud in the mid-2000s, when those platforms were new. Like other major technologies that preceded it, Kubernetes is still growing up, and there remains plenty of room to improve it by improving the platform's usability.


Moving Beyond Security Awareness to Security Education

“Unlike awareness, application security education is based on central principles or ‘big ideas’. If key security concepts are part of a continuous and programmatic education initiative, development teams can learn to apply knowledge, skills, and experience to novel situations and better secure applications,” said Baker. When application security principles are understood, developers can then not only identify when code isn’t quite right or spot something that creates risk, but also effectively design against it. “Awareness doesn’t go far enough for security-critical roles such as software developers, product and UX managers, quality assurance and scrum masters who are all responsible for delivering safe applications,” said Baker. “What’s needed is deeper education, and there are several ways that this can be incorporated into the awareness training mix.” First and foremost, Baker stated, the concept of continuous and programmatic security education—and why it matters for security-critical roles—requires buy-in from everyone within the organization. 


Blow for Meta: it must stop serving personalised ads until it's GDPR compliant

According to noyb, ten confidential meetings took place between Meta and the DPC during the course of the proceedings, over which time the DPC came down on the side of the company and its bypassing of the standard GDPR rules for consent. Schrems has launched multiple successful legal campaigns against technology companies and their misuse of personal data. He said: "This case is about a simple legal question. Meta claims that the 'bypass' happened with the blessing of the DPC. For years the DPC has dragged out the procedure and insisted that Meta may bypass the GDPR, but was now overruled by the other EU authorities. It is overall the fourth time in a row the Irish DPC got overruled." Schrems claimed the DPC had refused to release the details of the decision to noyb and accused the regulator of playing "a very diabolic public relations game". He added: "By not allowing noyb or the public to read the decision, it tries to shape the narrative of the decision jointly with Meta. It seems the cooperation between Meta and the Irish regulator is well and alive - despite being overruled by the EDPB."


What is data ingestion?

At its simplest, data ingestion is the process of shifting or replicating data from a source and moving it to a new destination. Some of the sources from which data is moved or replicated are databases, files or even IoT data streams. The data moved and/or replicated during data ingestion is then stored at a destination that can be on-premises. ... Data ingestion uses software automation to move large amounts of data efficiently, as the operation requires little manual effort from IT. Data ingestion is a mass means of data capture from virtually any source. It can deal with the extremely large volumes of data that are entering corporate networks on a daily basis. Data ingestion is a “mover” technology that can be combined with data editing and formatting technologies such as ETL. By itself, data ingestion only ingests data; it does not transform it. For many organizations, data ingestion is a critical tool that helps them manage data at the front end, as it enters the enterprise. A data ingestion tool enables companies to immediately move their data into a central data repository without the risk of leaving valuable data “out there” in sources that may later become inaccessible.
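The “mover, not transformer” distinction above can be made concrete with a toy sketch. Everything here is hypothetical (an in-memory CSV source and JSON-lines destination); it only illustrates that ingestion copies records as-is and leaves any transformation to a later ETL step:

```python
import csv
import io
import json

def ingest(source_lines, destination):
    """Copy records from a CSV source to a destination unchanged.

    Ingestion only moves the data; transformation (the "T" in ETL)
    would happen in a separate, later step."""
    reader = csv.DictReader(source_lines)
    count = 0
    for record in reader:
        # Records are written exactly as read, one JSON object per line.
        destination.write(json.dumps(record) + "\n")
        count += 1
    return count

source = io.StringIO("id,reading\n1,20.5\n2,21.1\n")
dest = io.StringIO()
moved = ingest(source, dest)
print(moved)  # -> 2
```

A real pipeline would replace the in-memory buffers with connectors for databases, files, or streams, but the shape stays the same: read, move, count, and nothing else.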


The Most Futuristic Tech at CES 2023

TCL's RayNeo X2 AR glasses are available for demo at CES 2023, and CNET's Scott Stein was able to use them to translate a conversation with a Chinese speaker in real time. The frames on the RayNeo X2 AR glasses are slightly bulkier than regular eyeglass frames, but prescription inserts eliminate the need to wear other glasses underneath, and the expected introduction of Qualcomm's AR1 chipset should reduce the size further. The RayNeo X2 AR glasses will be released to the developer community at the end of the first quarter of 2023, with a commercial launch set for later in the year. ... One of the more unusual prototypes shown at CES 2023 is a wearable neckband from a Japanese startup company called Loovic. The device hangs around your neck, sort of like studio headphones when not in use, and provides audio and tactile directions to help you navigate without looking at your phone. The device was inspired by Loovic CEO Toru Tamanka's son, who suffers from a cognitive impairment that makes following directions difficult. It will work for anyone who wants to receive navigation while keeping their head up. 


Kubernetes must stay pure, upstream open-source

Vendors may modify code for their custom distributions or the supporting applications you need to make Kubernetes run in production. While a modified version of Kubernetes will work with a particular vendor’s application stack and management tools, these proprietary modifications lock you into customized component builds and prevent you from integrating with other upstream open-source projects. And if the vendor's stack comprises multiple products, it’s very hard to achieve interoperability, which can cause lots of downstream issues as you scale. ... It’s incredibly difficult to merge back a fork that has diverged drastically over the years from the upstream. This is called technical debt – the cost of maintaining source code caused by deviation from the main branch where joint development happens. The more changes to forked code, the more money and time it costs to rebase the fork to the upstream project.


Managing Remote Workforces: IT Leaders Look to Expanded Suite of Tools

Ramin Ettehad, co-founder of Oomnitza, says organizations must connect their key systems and orchestrate rules, policies, and workflows across the technology and employee lifecycle, not with tickets and manual workloads, but rather with conditional rule-based automation of all tasks across teams and systems. He notes an example of a key business process that is challenged in a remote working world is employee onboarding and offboarding. “Companies want to make a positive first impression by offering new employees a consumer-esque onboarding experience,” he says. “This effort will maximize employee experience and time to productivity.” From the perspective of Dan Wilson, vice president analyst in the Gartner IT practice, some key tech tools for remote workforce management include remote control for IT support, to remotely see and interact with computers, as well as unified endpoint management (UEM). “UEM lets IT departments discover, manage, configure devices, and deploy software and operating system updates without having to connect to the corporate network or VPN,” he explains.


Social Engineering Attacks: Preparing for What’s Coming in 2023

Impersonation and comment spam have exploded over the past year and will likely be some of the most prominent forms of phishing in 2023. This type of social engineering attack exploits the trust and recognition associated with influencers. Attackers create an account on a social media site that looks nearly identical to an influencer’s. The posts are often giveaway announcements, declaring that fans just need to “click this link” or “DM this account on Telegram” to collect their winnings. Instead, people are tricked into giving away money and are ghosted by the fake account. Impersonation and comment spam have become so serious on YouTube that prominent creators have asked the platform to address the issue. The scam results in monetary theft and hurts the reputation of the creators being impersonated. ... One peculiar new form of social engineering on the rise is reputation ransomware. This scare tactic exploits the headline nature of data breach announcements. The cybercriminal will demand ransom from the victim organization, threatening to “leak” news of a fictional data breach if they do not pay.



Quote for the day:

"Without continual growth and progress, such words as improvement, achievement, and success have no meaning.” -- Benjamin Franklin

Daily Tech Digest - January 04, 2023

AI is coming to the network

The dynamics of infusing AI into a network organization will, as with many other forms of automation, center on four modes of interaction: offloading, reskilling, deskilling, and displacing. AI offloading means putting AI tools at the command of trained and experienced networking professionals to help them do their work. The idea is to make network pros more effective by allowing them to offload tasks that are repetitive, complex, time sensitive, or require extremely high levels of focused attention, but that are not creative. This is supposed to free these scarce and precious resources to do other, higher-level work instead, while paying minimal and supervisory attention to what the AI is doing. (Human attention is the most precious resource in any IT shop.) The network team doesn’t shrink, and its portfolio of services can even grow without the team also having to grow to make that possible. Reskilling allows network staff to be trained to move into other parts of IT or into entirely different kinds of jobs. It also encompasses the idea of using AI to help train new network staff up to proficiency.


Distributed SQL: An Alternative to Database Sharding

Distributed SQL is the new way to scale relational databases with a sharding-like strategy that's fully automated and transparent to applications. Distributed SQL databases are designed from the ground up to scale almost linearly. ... In simple terms, a distributed SQL database is a relational database with transparent sharding that looks like a single logical database to applications. Distributed SQL databases are implemented as a shared-nothing architecture and a storage engine that scales both reads and writes while maintaining true ACID compliance and high availability. Distributed SQL databases have the scalability features of NoSQL databases—which gained popularity in the 2000s—but don’t sacrifice consistency. They keep the benefits of relational databases and add cloud compatibility with multi-region resilience. A different but related term is NewSQL (coined by Matthew Aslett in 2011). This term also describes scalable and performant relational databases. However, NewSQL databases don’t necessarily include horizontal scalability.
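The “transparent sharding” idea can be sketched with a toy routing function. A distributed SQL database performs this kind of key-to-shard mapping internally, which is why applications see one logical database; real systems are far more sophisticated (range splits, rebalancing, replication), and the shard names and hashing scheme below are invented for illustration:

```python
import hashlib

# Hypothetical shard names; a real cluster would manage these itself.
SHARDS = ["shard-0", "shard-1", "shard-2", "shard-3"]

def shard_for(key: str) -> str:
    """Deterministically map a row key to a shard.

    Hashing the key spreads rows evenly across shards, and the same
    key always lands on the same shard, so reads find what writes
    stored."""
    digest = hashlib.sha256(key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(SHARDS)
    return SHARDS[index]

# Routing is stable: the same key always resolves to the same shard.
assert shard_for("user:42") == shard_for("user:42")
```

The point of distributed SQL is that none of this logic lives in the application; manual sharding puts exactly this code (and the much harder rebalancing problem) on the application team.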


How layoffs can affect diversity in tech—and what to do about it

Although layoffs have dominated the conversation during the latter part of the year, evidence shows that the Great Resignation isn’t over yet. Online job site Hired found that attracting, hiring, and retaining top talent has proven to be difficult, citing employee burnout as a key challenge, placing the blame on rapid changes in the employment environment and angst over mass layoffs and hiring freezes. For companies yet to announce job cuts, Laman said that before any decision is made, organizations need to be sure they factor DE&I into decisions around layoffs. ... However, Williams argued that there's a lot of evidence to suggest that we pattern match when we try to spot potential, meaning that one of the really big risks from all these layoffs is that if you disproportionately have just one type of person represented at a leadership level making the decisions about who stays and who goes, they're not going to have understood or realize the potential of some people who look very different or are very different from them. Carver agrees, noting that being a good manager and being a good technologist are not one and the same, meaning people are often promoted despite lacking some necessary management skills.


How Global Turmoil and Inflation Will Impact Cybersecurity and Data Management in 2023

Rising geopolitical tensions between China, Russia, and NATO allies are responsible for increased cybersecurity threats. This will lead to companies tightening security measures in 2023. With healthcare, financial, defense, and public utility sectors facing new threats from politically motivated bad actors, the organizations with cloud-based IT operations should consider employing “data geofencing” through contractual agreements with their cloud providers -- many of which store data in global data centers -- to ensure data is kept within designated regions due to national security concerns and local legal requirements. Organizations in highly regulated industries must be on high alert to protect data and websites against DDoS attacks and phishing expeditions. Data management and cybersecurity professionals should work together to devise and execute new strategies that “meet the moment” and mitigate the potential for critical customer and corporate data eventually winding up on the Dark Web. One way data teams can support company security policies is by “flipping the script” on data asset management. 


Cyberattackers Torch Python Machine Learning Project

In the latest attack on PyTorch, the attacker used the name of a software package that PyTorch developers would load from the project's private repository, and because the malicious package existed in the PyPI repository, it gained precedence. The PyTorch Foundation removed the dependency in its nightly builds and replaced the PyPI project with a benign package, the advisory stated. ... Fortunately, because the torchtriton dependency was only imported into the nightly builds of the program, the impact of the attack did not propagate to typical users, Paul Ducklin, a principal research scientist at cybersecurity firm Sophos, said in a blog post. "We're guessing that the majority of PyTorch users won't have been affected by this, either because they don't use nightly builds, or weren't working over the vacation period, or both," he wrote. "But if you are a PyTorch enthusiast who does tinker with nightly builds, and if you've been working over the holidays, then even if you can't find any clear evidence that you were compromised, you might nevertheless want to consider generating new SSH key pairs as a precaution, and updating the public keys that you've uploaded to the various servers that you access via SSH."
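The mechanism behind this attack, dependency confusion, can be sketched with a toy resolver. The version numbers and the "highest version wins across all indexes" rule below are illustrative assumptions, not pip's actual resolution algorithm, but they capture why a public package with the same name can shadow a private one:

```python
# Toy model of dependency confusion: a resolver consults several
# indexes and naively takes the highest version seen anywhere, so
# an attacker only needs to publish a higher version on the public
# index under the private package's name. (Versions hypothetical.)
private_index = {"torchtriton": ["1.0.0"]}    # legitimate, private
public_index  = {"torchtriton": ["99.0.0"]}   # attacker-controlled

def resolve(name, *indexes):
    """Pick a version for `name` across all consulted indexes."""
    candidates = [v for idx in indexes for v in idx.get(name, [])]
    # Naive rule: highest version wins, regardless of which index
    # it came from -- this is the opening the attacker exploits.
    return max(candidates, key=lambda v: tuple(map(int, v.split("."))))

chosen = resolve("torchtriton", private_index, public_index)
print(chosen)  # -> 99.0.0, the public, malicious copy
```

Mitigations amount to removing that ambiguity: pinning an explicit index per package, using hash-pinned requirements, or registering placeholder packages on the public index, as the PyTorch Foundation did here.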


Why it might be time to consider using FIDO-based authentication devices

Every business needs a secure way to collect, manage, and authenticate passwords. Unfortunately, no method is foolproof. Storing passwords in the browser and sending one-time access codes by SMS or authenticator apps can be bypassed by phishing. Password management products are more secure, but they have vulnerabilities as shown by the recent LastPass breach that exposed an encrypted backup of a database of saved passwords. For organizations with high security requirements, that leaves hardware-based login options such as FIDO devices. The FIDO (Fast Identity Online) standard is maintained by the FIDO Alliance and aims to reduce reliance on passwords for security. It does so by complementing or replacing them with strong authentication based on public-key cryptography. FIDO includes specs that take advantage of biometric and other hardware-based security measures, either from specialized hardware security gadgets or the biometric features built into most new smartphones and some PCs. That makes FIDO and other physical key or token methods more phishing resistant and harder for attackers to bypass. 


Why organizations tend to fall short on secure data management

Developing a more comprehensive structure for data classification by determining a piece of data’s value, its risk profile, or its level of sensitivity can improve understanding of the data retention period, thus informing data policy to help mitigate risk and reduce the attack surface for a potential breach. That means determining from the outset that data needs to get sanitized after a set time and through a set policy, rather than waiting until the asset it sits on is disposed. Equally, by thinking about the information lifecycle from the get-go, enterprises can make quick decisions on whether they should even have that data, and if not, they should erase it immediately with a certificate proving that the erasure has been successful. If data has only been held as part of a project, then when that project finishes the team should remove it from the infrastructure under that organization’s command. Classifying data appropriately can provide actionable insight to restructure policies and help employees better understand the information lifecycle management process.
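Classification-driven retention of the kind described above can be sketched in a few lines. The tiers and retention periods below are invented for the example; the point is that the erasure date is fixed by policy when the data is classified, not when the asset it sits on is eventually disposed of:

```python
from datetime import date, timedelta

# Hypothetical retention periods per classification tier, in days.
RETENTION = {"public": 3650, "internal": 1825, "sensitive": 365}

def erase_after(classification: str, created: date) -> date:
    """Date by which data of this class should be sanitized."""
    return created + timedelta(days=RETENTION[classification])

def overdue(classification: str, created: date, today: date) -> bool:
    """True when the policy says this data should already be gone."""
    return today > erase_after(classification, created)

# Sensitive data created at the start of 2021 is past its one-year
# retention window by early 2023; public-tier data is not.
assert overdue("sensitive", date(2021, 1, 1), date(2023, 1, 4))
assert not overdue("public", date(2021, 1, 1), date(2023, 1, 4))
```

A production system would also record a verifiable erasure certificate when the deadline is enforced, which is the other half of the policy the article describes.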


7 downsides of open source culture

The word community gets thrown around a lot in open source circles, but that doesn’t mean open source culture is some sort of Shangri-La. Open source developers can be an edgy group: brusque, distracted, opinionated, and even downright mean. It is also well known that open source has a diversity problem, and certain prominent figures have been accused of racism and sexism. Structural inequality may be less visible when individuals contribute to open source projects with relative anonymity, communicating only through emails or bulletin boards. But sometimes that anonymity begets feelings of disconnection, which can make the collaborative process less enjoyable, and less inclusive, than it's cracked up to be. Many enterprise companies release open source versions of their product as a “community edition.” It's a great marketing tool and also a good way to collect ideas and sometimes code for improving the product. Building a real community around that project, though, takes time and resources. If a user and potential contributor posts a question to an online community bulletin board, they expect an answer. 


Is Silicon Valley's Unique Aura Fading Away?

The Silicon Valley mindset is about using technology to push for what’s possible, not what’s probable, says Shannon Goggin, co-founder and CEO of San Francisco-based benefits data platform provider Noyo. “It’s about taking big swings, building the future, and creating breakthroughs.” Yet after decades of tech dominance, doubts about Silicon Valley's long-term industry supremacy are beginning to appear. ... With a rise in remote work, the value that Silicon Valley employees once placed on a vibrant office life with trendy workspaces, elaborate on-site meals, and transportation has faded, Jain observes. “People are now looking for solid employers who offer opportunities for collaboration and the ability to make a difference,” he says. “Silicon Valley tech firms are starting to take notice.” Thanks to emerging distributed company models, the Silicon Valley mindset will continue spreading to other areas, Goggin says. “The startup ecosystem is incredibly supportive, and I will be proud to see the next generation of companies create even more opportunity for people who haven’t historically had the chance to participate in the startup and tech ecosystem,” she says.


9 steps to protecting backup servers from ransomware

The backup server should not be connected to lightweight directory access protocol (LDAP) or any other centralized authentication system. These are often compromised by ransomware and can easily be used to gain usernames and passwords to the backup server itself or to its backup application. Many security professionals believe that no administrator accounts should be put in LDAP, so a separate password-management system may already be in place. A commercial password manager that allows sharing of passwords only among people who require access could fit the bill. MFA can increase security of backup servers, but use a method other than SMS or email, both of which are frequently targeted and circumvented. Consider a third-party authentication application such as Google Authenticator or Authy or one of the many commercial products. Backup systems should be configured so nearly no one has to log in directly to an administrator or root account. For example, if a user account is set up on Windows as an administrator account, that user should not have to log into it in order to administer the backup system.
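The authenticator apps mentioned above implement time-based one-time passwords (TOTP, RFC 6238). As a sketch of the mechanism — not a substitute for a vetted authentication product — here is a minimal stdlib implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over the big-endian counter, dynamically truncated.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: HOTP over the number of 30-second steps since the Unix epoch.
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step), digits)
```

Because the code depends only on a shared secret and the current time, it works without SMS or email — which is why it resists the interception attacks those channels suffer from.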



Quote for the day:

"A tough hide with a tender heart is a goal that all leaders must have." -- Wayde Goodall

Daily Tech Digest - January 03, 2023

Security Top IT Investment Priority in 2023

Dennis Monner, chief commercial officer at Aryaka, says he thinks what IT leaders are finding is that the talent that they really need on their teams is in short supply. “The boundaries between the traditional, functional disciplines are getting fuzzy, requiring a new breed of security professional,” he explains. “The cloud team needs to understand the network. The network team needs to understand security. It’s driving them to rethink their investment and hiring strategy.” He adds that recruiting, training, and retention all take real dollars from the budget that could potentially be deployed in services that guarantee performance. “You can only outsource security to a certain degree,” Haff cautions. “Even if you're 100% in a public cloud, you're still largely responsible for your own application security, as well as your internal access and authentication procedures.” While a cloud provider can implement all manner of security tech and processes, if you don't control who has access, those won't do much good. “It was somewhat disappointing that, although our survey generally showed investments in people was a high priority, ‘hiring security or compliance staff’ was one of the lowest security funding priorities,” he adds.


How biometric payments are tackling financial exclusion

Even the most reluctant individuals are likely to have succumbed to contactless payments and some form of digitised banking in recent times. This will have the positive impact of making the needed transition to biometrics more seamless. Using fingerprints or facial recognition to unlock phones or access apps is not unusual. If anything, they have been convenient and comforting additions to the surge of tech innovations over the last couple of decades. There is a relief in knowing that these portals are being secured by methods that are almost impossible to replicate. It is a breakthrough that financial players and governments in the world’s most developed countries still need to catch up with, as emerging economies have already capitalised on biometrics’ capabilities for almost a decade now. In India, for example, internal fraud and leakage from pension payments dropped by 47% after transitioning from cash to biometric smart cards. Because the solution bypasses the need for prior credit ratings or credentials, the country has also been able to catalyse safe online banking among previously unbanked adults since biometrics’ introduction in 2014.


Decentralised finance – a threat for traditional FS firms, or an opportunity?

Done right, DeFi offers traditional banks and financial services firms the ability to reduce costs, increase speed and attract new customers who are looking for simplified, more attractive, and secure solutions. When we look at the current payments ecosystem, we’re confronted with a maze of payments services, systems and rules which rely on a cacophony of different players. DeFi offers a solution to this inherent friction, delivering ecosystems that can run autonomously based on rules and verify transactions without human intervention. The main attractions of this innovation are two-fold. Firstly, it reduces inefficiency while eliminating fees, manual effort (e.g. for corporate actions) and intermediaries. Basic transactions can be executed at any time, from any place, with the only requirements being an internet connection and a compliant wallet. By removing the middleman in asset rights transfers, lowering exchange fees, and giving access to wider global markets, moving securities on blockchain could save between $17B and $24B in global trade processing costs.
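The idea of rules verifying and settling transactions without human intervention can be illustrated with a toy ledger. This is plain Python rather than an actual smart-contract language, and the rules are invented for the sketch:

```python
def validate_and_settle(ledger: dict, sender: str, receiver: str, amount: int) -> bool:
    # The "contract" enforces the rules: a positive amount and a sufficient
    # balance. If the rules hold, settlement is immediate -- no intermediary,
    # no manual reconciliation step.
    if amount <= 0 or ledger.get(sender, 0) < amount:
        return False
    ledger[sender] -= amount
    ledger[receiver] = ledger.get(receiver, 0) + amount
    return True
```

In a real DeFi system the same logic runs as code on a blockchain, so every node can verify that settlement followed the rules.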


Engineering Best Practices of CI Pipelines

The essence of a CI/CD system is to aim for green builds and to resolve issues quickly when a red build occurs, meaning a test failed. When the automated tests run, any failure results should be visible to all team members. Then, it should be a top priority for the team to make the build work again. Green builds and rapid fixes are critical for two reasons. First, when tests are failing, it is not possible to test forthcoming development and changes accurately. Second, continuous deployment will be halted, because no new and validated packages exist. Although it may seem like a frustrating situation to stop active development and instead focus on fixing failed tests, this mindset will ensure optimal application stability. An efficient CI/CD system should be the only path that leads to the production environment. In other words, if you have confidently built a CI/CD system with a comprehensive set of tests, there should be no other way to deploy applications to the production system. It can be highly tempting — and common — to maintain administrator privileges and deploy an application to the production systems just this once.
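The "only path to production" rule can be enforced mechanically: the deploy step runs only when the test suite exits green. A minimal sketch, where the test command and the print statements stand in for a real pipeline stage:

```python
import subprocess

def ci_gate(test_cmd: list) -> int:
    """Run the test suite; allow deployment only on a green (exit 0) build."""
    result = subprocess.run(test_cmd)
    if result.returncode != 0:
        # Red build: surface the failure to the whole team and block the
        # pipeline -- no validated package exists to deploy.
        print("RED build: fixing the failing tests is now the top priority")
        return result.returncode
    print("GREEN build: the validated package may proceed to deployment")
    return 0
```

Returning the test suite's own exit code means any caller (or CI runner) inherits the red/green status without extra plumbing.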


Blue-Green Deployment From the Trenches

The concept of blue-green deployment is to have (at least) two instances of an application running at one time. When a new version is released, it can be released to just one (or some) instances, leaving the others running on the old version. Access to this new version can be restricted completely at first, then potentially released to a subset of consumers, until confidence in the new release is achieved. At this point, access to the instance(s) running the old version can be gradually restricted and then these too can be upgraded. This creates a release with zero downtime for users. There are, of course, caveats. Any breaking change to data sources or APIs means that old requests cannot be processed by the new version, which rules out a blue-green release. It’s one of my favourite interview questions to ask how one might approach a breaking change in a blue-green environment on the off-chance that someone comes up with a great solution, but it would probably involve some bespoke routing layer to enrich or adapt "old" requests to the "new" system. At which point, you’d have to consider whether it isn’t better just to have some good old downtime. 
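The gradual shift of traffic from the old to the new version can be sketched as a weighted router in front of the two instances. The class and weights below are an illustration, not a production traffic manager:

```python
import random

class BlueGreenRouter:
    """Sends a configurable fraction of requests to the new (green) version."""

    def __init__(self, blue, green, green_weight: float = 0.0):
        self.blue, self.green = blue, green        # callables handling a request
        self.green_weight = green_weight           # 0.0 = all old, 1.0 = all new

    def route(self, request):
        # Pick a backend per request according to the current weight.
        backend = self.green if random.random() < self.green_weight else self.blue
        return backend(request)
```

Raising `green_weight` step by step, and watching error rates at each step, is the confidence-building phase the article describes; once it reaches 1.0, the blue instances can be upgraded in turn, giving a zero-downtime release.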


Top 10 AI Trends that Will Redefine Technology in the Year 2023

The role of AI and data science in innovation and automation will increase in 2023. Data ecosystems are able to scale, decrease waste, and provide timely data to a variety of inputs. But laying the foundation for change and fostering innovation is crucial. With the use of AI, software development processes can be optimised, and further advantages include greater collaboration and a larger body of knowledge. We need to foster a data-driven culture and go past the experimental stages in order to change to a sustainable delivery model. This will undoubtedly be a significant advancement in AI. ... Over the past few years, IT systems have become more sophisticated. Vendors will seek platform solutions that offer visibility across numerous monitoring domains, including application, infrastructure, and networking, according to a new Forrester prediction. ... The automatic modification of neural net topologies and improved tools for data labelling are two promising areas of automated machine learning. When the selection and improvement of a neural network model are automated, the cost and time to market for new solutions for artificial intelligence (AI) will be reduced.


How the EU plans to take on big tech in 2023

Increasing competition could leave gaps for European challengers to enter. The EU, however, has historically struggled to turn its world-leading research into big tech companies. One barrier is the notoriously slow and inefficient transfer of IP from academia to the economy. This problem is illustrated by the EU producing more research papers than the US, but turning far fewer into commercial applications. According to Luigi Congedo, a venture capitalist and Innovation Advisor at marketing firm Clarity, this weakness can be reduced by changing the EU’s investment framework. This, he argues, could stimulate a more effective technology transfer — and prevent promising startups from being acquired by Silicon Valley giants. “We need to create our Google, Facebook, and Microsoft, and, in order to do it, create a better environment to compete and do business across the continent,” he said. “If we fail in creating a real European platform for innovation and instead maintain the current ‘country-based model,’ all our emerging businesses will end up becoming M&A targets for American multinational companies.”


The limitations of mathematical modeling

Thompson believes these failures are often owing to misaligned incentives: “Those who correctly estimate significant tail risks [i.e., deviations from the normal distribution in a statistical model] may not be recognized or rewarded for doing so. Before the event, tail risks are unknown anyway if they can only be estimated from past data,” and “after the event, there are other things to worry about.” In short, it was in investors’ interest to design a model that characterized unlikely risks as infinitesimally so, and regulators weren’t paying attention. So why should we bother with models at all? Occasionally, Thompson believes, they do get it right. Her preferred example concerns research by two chemists, F. Sherwood Rowland and Mario Molina, who in the 1970s modeled the potential impact on the ozone layer of the continued release of chlorofluorocarbons, or CFCs. Within 15 years of their research, an international agreement, the Montreal Protocol, had been signed to limit CFC use, and it is now possible that the ozone layer could recover to its 1980 level by 2050. “The acceptability of the model was a function of the relatively simple context and the low costs of action,” Thompson explains.
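The tail-risk point can be made concrete with Python's statistics module: a model assuming calm, normally distributed returns puts a far smaller probability on a 5-sigma move than one that admits even rare crisis days. The 99%/1% mixture below is an invented illustration, not a calibrated financial model:

```python
from statistics import NormalDist

calm = NormalDist(0, 1)
crisis = NormalDist(0, 10)   # rare days with 10x the usual volatility

# Probability of a move beyond +5 under each model.
p_normal = 1 - calm.cdf(5)
p_mixture = 0.99 * (1 - calm.cdf(5)) + 0.01 * (1 - crisis.cdf(5))

print(f"pure-normal model: {p_normal:.2e}")   # treated as essentially impossible
print(f"mixture model:     {p_mixture:.2e}")  # orders of magnitude larger
```

A model-builder rewarded for reporting small risks picks the first model; the event itself doesn't care which was chosen.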


Business and tech leadership predictions 2023

With remote working on the rise – despite some companies attempting to go back to the office – global hiring will continue to increase. More and more people will be able to work in digital jobs that can be done from anywhere. “When you hire internationally, you have access to a much larger talent pool, and with the possibility of hiring employees to work from anywhere in the world, companies will have a unique opportunity of filling their roles in a more diverse way to increase cross-cultural competency in remote teamwork,” says Kelvin Ong, chief of staff at online software engineering school Microverse. However, Ong agrees with James Wilkinson that this means IT managers will have to develop their soft skills, such as explicit and clear written communication (“low-context communication”) and sending messages where there is a time lag before you get a response (“asynchronous communication”). ... Hedley says: “Most recessions are mild and temporary. While they are not fun, recessions can be endured. Second, business owners can, to a large extent, control their own destiny. And that’s especially true when it comes to identifying and hiring the talent that will move the needle.”


How to be the manager your IT team needs in 2023

Authenticity is important in creating high-performing teams because it lays the groundwork for strong relationships and environments in which employees can bring their whole, best selves to work. Being authentic doesn’t mean baring all your darkest secrets, but it does mean understanding your own personal style and drivers and helping your team understand those. Humans are wired for consistency, so when you show up consistently and authentically, your employees know what to expect, how to approach you, and what’s important. Better still, they feel they have space to share who they are and what drives them. ... Perhaps the most important tip, though, is to be present when you are with your team. Shifting to all virtual work over the last couple of years has taken a toll on our ability to focus in the moment. We are constantly typing emails while listening to conference calls or responding to chats and texts while also trying to write articles or create solutions for clients. The pressure to multitask is great, but the benefits of focus and attention are even greater.



Quote for the day:

"Be so good at what you do that no one else in the world can do what you do." -- Robin Sharma

Daily Tech Digest - January 02, 2023

5 ways CIOs will disappoint their CEOs in 2023

Promise #1: The cloud will save money. Disappointment: It never did, and still won’t. Why it won’t: You can buy servers as cheaply as the cloud providers, and they need to add a profit margin when they charge you for using them. What you should promise instead: Unlike on-premises infrastructure, the cloud lets IT easily add capacity in small increments when demand requires it. And — and this is the biggie — it also lets IT shed capacity when it’s no longer needed. The result? When demand is seasonal or unpredictable, the cloud truly does save money. But when demand is steady, or increases in demand are predictable, on-premises infrastructure costs less. In the cloud, fixed costs are small but incremental costs are big. The costs of on-premises systems are the opposite. ... Promise #4: ‘Agile’ means no more big-project failures. Disappointment: Your name will be on some miserable Agile project failures this year. What’s going to go wrong: Your company is going to make three Agile mistakes. The first, and worst, is that it won’t lose the habit of insisting on multitasking — developers will still be asked to juggle multiple competing projects, and their top priority will still be the next phone call.
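The fixed-versus-incremental cost argument is simple arithmetic. The rates below are invented for illustration — a cloud hour priced above an amortized owned hour, which is the usual shape of the trade-off:

```python
HOURS_PER_YEAR = 8760

def cloud_cost(peak_units: int, avg_utilization: float, rate: float) -> float:
    # Pay-as-you-go: with auto-scaling you pay only for capacity actually used.
    return peak_units * avg_utilization * HOURS_PER_YEAR * rate

def onprem_cost(peak_units: int, rate: float) -> float:
    # Owned hardware: provisioned for peak and paid for around the clock.
    return peak_units * HOURS_PER_YEAR * rate

# Hypothetical rates: $0.50/unit-hour in the cloud vs $0.30 amortized on-prem.
seasonal = (cloud_cost(100, 0.2, 0.50), onprem_cost(100, 0.30))  # 20% utilized
steady = (cloud_cost(100, 0.9, 0.50), onprem_cost(100, 0.30))    # 90% utilized
```

At 20% average utilization the cloud wins ($87,600 vs $262,800 a year under these made-up rates); at 90% it loses ($394,200 vs $262,800) — exactly the seasonal-versus-steady split the paragraph describes.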


Digital transformation: 4 security tips for 2023

Cybersecurity training keeps employees, customers, and vendors safe from cyberattacks. Take the initiative to seek out top-of-the-line training resources that will walk you through every aspect of promoting a secure environment. Training does not need to be expensive. Learn how to avoid data breaches, cultivate a security-first mindset, and maintain airtight security. While no measure can prevent a cyberattack entirely, proper training can help minimize your risk and reduce the chance of a breach. In addition, continue to sweat the small stuff. While one weak password or phishing email may not seem like a big deal, it’s in your best interest to take every threat seriously. Implement strong password complexity controls and policies, develop and maintain phishing campaigns, track user activity, and create policies for sharing information on the internet. For example, posting information on social media could reveal answers to common security questions. Staying vigilant will help your organization avoid trouble in the future.
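Password complexity controls of the kind mentioned above can be expressed as a simple programmatic check. The rules below are an illustrative baseline, not a complete policy (modern guidance also weighs length and breach-list checks over character classes):

```python
import re

def meets_policy(pw: str, min_len: int = 12) -> bool:
    # Illustrative policy: minimum length plus four character classes.
    checks = [
        len(pw) >= min_len,
        bool(re.search(r"[A-Z]", pw)),        # an uppercase letter
        bool(re.search(r"[a-z]", pw)),        # a lowercase letter
        bool(re.search(r"\d", pw)),           # a digit
        bool(re.search(r"[^A-Za-z0-9]", pw)), # a symbol
    ]
    return all(checks)
```

Enforcing such a check at account creation is one small, concrete piece of the "sweat the small stuff" advice.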


Wireless electronics can power trillions of IoT sensors. Here's how

We are yet to witness the full potential of IoT, but before that, we need to overcome a big challenge. The sensors that make IoT networks possible require power to stay functional, and unfortunately, our existing energy solutions are not enough to support this demand. A team of researchers at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia has been working on this problem and in their latest study, they propose an interesting solution. The authors reveal details about wireless-powered electronics that promise to meet the energy demands of IoT networks in a sustainable and eco-friendly manner. Sensors are currently powered by technologies like Li-ion batteries. Although batteries can power a large network of devices, they need to be replaced again and again. Therefore a battery-based approach is expensive, unsustainable, and harmful to the environment. For instance, conventional batteries are made of metals that are procured through mining activities resulting in air and soil contamination. Plus, when these batteries are not carefully disposed of, they release toxic chemicals into our environment.


Agile vs. waterfall: Comparing project management cultures

Waterfall and agile culture are different forms of managing software projects, but they are made of the same constituent concept: people managing people. The values we covered, on the other hand, are not interchangeable. They are different in kind, they are indeed the quintessential difference between agile and waterfall. Following the scrum guide by the book, having squads, agile coaches, dailies, and meetups might make you show up as agile, but unless your values are aligned with the Manifesto, you’re just dressing waterfall as agility. This is precisely the scenario we have been witnessing in the last few years. As more and more companies see the results of strong agile culture creating unicorns and industry juggernauts, more of them want a quick way to execute digital transformation. What happens is that they start practicing agile, but keep the waterfall values of control with a lack of flexibility and hierarchy. Even worse, since the number of successfully transformed companies is way smaller than those who just pretend to have transitioned, more and more people have no experience with agile values, leading them to believe that doing agile with waterfall values is perfectly normal.


What Rust Brings to Frontend and Web Development

“Rust to WebAssembly is one of the most mature paths because there’s a lot of overlap between the communities,” Gardner told The New Stack. “A lot of people are interested in both Rust and WebAssembly at the same time.” It’s not an either “Rust or JavaScript” or even “WebAssembly or JavaScript” situation, he said. It’s possible to blend WebAssembly with JavaScript. “You’re going to see some people rewrite for WebAssembly, but you’re going to see some people take advantage of WebAssembly where appropriate, and then use JavaScript for connecting the various pieces under the hood, and maybe running portions of the application as necessary,” he said. ... Chris Siebenmann, a Unix systems administrator at the University of Toronto’s CS Labs, has a theory about that: Languages spread when developers like using the language to accomplish things that matter to them. Right now, that language is Rust. “Rust is a wave of the future because a lot of people are fond of it and they are writing more and more things in Rust, and some of these things are things that matter to plenty of people,” Siebenmann wrote in 2021.


An Entity to DTO

According to Martin Fowler, a DTO is: “An object that carries data between processes in order to reduce the number of method calls. When you're working with a remote interface, such as Remote Facade, each call to it is expensive. As a result, you need to reduce the number of calls. The solution is to create a Data Transfer Object that can hold all the data for the call.” So, initially, DTOs were intended to be used as a container for remote calls. In a perfect world, DTOs should not have any logic inside and should be immutable. We use them only as state holders. Nowadays, many developers create DTOs to transfer data between application layers, even for in-app method calls. If we use JPA as a persistence layer, we often read the opinion that it is bad practice to use entities in the business logic, and that all entities should be immediately replaced by DTOs. We recently introduced DTO support in the JPA Buddy plugin. The plugin can create DTOs based on JPA entities produced by data access layer classes and vice versa – create entities based on POJOs. This allowed us to look at DTOs closer and see how we can use them on different occasions.
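The article's context is Java and JPA, but the pattern itself is language-agnostic. As a sketch, here is an immutable state-holder DTO in Python, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # immutable: a DTO should be a pure state holder
class CustomerDTO:
    id: int
    name: str
    email: str

def to_dto(entity) -> CustomerDTO:
    # Map the persistence entity's fields onto the transfer object, so the
    # entity itself never crosses the layer boundary.
    return CustomerDTO(id=entity.id, name=entity.name, email=entity.email)
```

`frozen=True` makes assignment to a field raise an error, which captures the "no logic, immutable" ideal from Fowler's definition.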


Blockchain & Internet Of Things Are A Perfect Match

It won’t all be plain sailing if we’re to migrate IoT workloads to a blockchain-based infrastructure. There are some key issues that need to be overcome, but luckily a number of interesting solutions are already being built. One of the main challenges with blockchain is that it’s not a low-latency protocol. As such, most blockchains process a very low number of transactions per second, and that presents issues for large-scale IoT device networks, as these require extremely rapid rates of data transfer to keep up. Ethereum, the world’s most popular smart contract blockchain, is only capable of processing around seven transactions per second, for example. Moreover, the Ethereum network is often congested, leading to high transaction costs. In its natural state, it’s not a realistic platform for large-scale IoT deployments. The answer to this problem may lie in scaling solutions like Boba Network, which is a Layer-2 network and hybrid compute platform that powers lightning fast transactions with much lower costs than traditional Layer-1 networks. Boba Network relies on a technology called optimistic rollups, which enable multiple transactions to be bundled into one and processed simultaneously. 
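The rollup idea — many transactions committed as one — can be sketched as a batch digest. This toy version deliberately omits the fraud-proof machinery that makes real optimistic rollups trustworthy:

```python
import hashlib
import json

def bundle(transactions: list) -> dict:
    # Many Layer-2 transactions become a single Layer-1 commitment: one
    # digest stands in for the whole batch, so the per-transaction cost of
    # settling on the congested base chain shrinks with batch size.
    payload = json.dumps(transactions, sort_keys=True).encode()
    return {"count": len(transactions),
            "commitment": hashlib.sha256(payload).hexdigest()}
```

Because the commitment is deterministic, anyone holding the batch can recompute and verify it — the "optimistic" part is that it is assumed valid unless someone proves otherwise within a challenge window.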


Getting data loss prevention right

DLP is not a plug-and-play solution. There is considerable prep work that must take place before anything is deployed. Reliable processes must exist for identifying data, performing continuous inspections, and verifying results. There must be a clear framework that identifies how data is classified, what gets blocked, and who is responsible for ultimately setting policies. Historically, many DLPs have relied on pattern recognition via regular expressions (regex), which offers mediocre insights into how data is used. In other words, even with the right people at the helm, the tools may be lackluster. DLP’s middling capabilities, often wielded by untrained IT departments, have given it a reputation for over-promising and under-delivering. Without a strong ability to apply context to data, many DLPs are glorified string-matching tools that overwhelm analysts with false positives. ... Much of DLP’s shortcomings are attributable to untrained staff or poor implementations. Some DLPs are built upon frameworks with functional limitations that may negatively impact their effectiveness.
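The "glorified string-matching" problem is easy to demonstrate: a context-free regex for US Social Security numbers flags anything with the right shape. The pattern and sample strings below are illustrative:

```python
import re

# Naive DLP rule: flag anything shaped like a US SSN (ddd-dd-dddd).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan(text: str) -> list:
    return SSN_PATTERN.findall(text)

hits_real = scan("Employee SSN on file: 123-45-6789")         # true positive
hits_false = scan("Ticket ref 555-01-2024 raised by support")  # not an SSN
```

Both strings are flagged, but only one is sensitive — without context about what the data is and where it is going, every such hit lands in an analyst's queue as a potential incident.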


Ransomware ecosystem becoming more diverse for 2023

The ransomware ecosystem has changed significantly in 2022, with attackers shifting from large groups that dominated the landscape toward smaller ransomware-as-a-service (RaaS) operations in search of more flexibility and drawing less attention from law enforcement. This democratization of ransomware is bad news for organizations because it also brought in a diversification of tactics, techniques, and procedures (TTPs), more indicators of compromise (IOCs) to track, and potentially more hurdles to jump through when trying to negotiate or pay ransoms. ...  "Fast forward to this year, when the ransomware scene seems as dynamic as ever, with various groups adapting to increased disruptive efforts by law enforcement and private industry, infighting and insider threats, and a competitive market that has developers and operators shifting their affiliation continuously in search of the most lucrative ransomware operation." ... This trend is likely to continue in 2023 with ransomware groups expected to come up with new extortion tactics to monetize attacks on victims where they're detected before deploying the final ransomware payload.


Driving Employee Retention and Performance Through Recruiting

When the job market reopened as the pandemic wound down, there simply weren’t enough workers to fill jobs. Recruiters and hiring managers were under a lot of pressure to fill roles and fill them fast. The Muse CEO and founder Kathryn Minshew explains it this way: With companies desperate to hire and HR pros stretched thin, recruiters may be going rogue and stretching the truth to fill roles. Or, they could say things they think are true, but they don’t have the full picture of the workplace experience. She advises companies to be honest about what it’s like to work there, including successes as well as areas for improvement. Interviews should be a two-way street, and you must give candidates enough time to ask questions about company culture. "When people feel like they have opted into a situation with eyes wide open," Minshew says, "they’re much more likely to accept the good and the bad, and to show up as engaged, productive, satisfied employees. Rather than fluffy mission statements, what if you were able to openly and transparently connect candidates to their personal purpose from their first connection to your employer brand?"



Quote for the day:

"A lot of people have gone farther than they thought they could because someone else thought they could." -- Zig Ziglar