Things shift as the application is deployed and scaled. At that point, the fact that something is open source quickly becomes irrelevant. Instead, engineers — and business leaders — care about things like reliability and security. And they are willing to pay for them. If an open source project is geared primarily toward the “build” phase and is either less visible or less valuable at the deploy and scale phases, it will be hard to monetize, no matter how popular it is. Similarly, it’s always easier to monetize a project that provides a mission-critical capability, something that would directly impact users’ revenue if it didn’t work. A project that facilitates payments is going to be very easy to monetize, but a project that makes the fonts on a webpage particularly beautiful has an uphill battle. As an example, Fanelli pointed to Temporal, an open source microservice orchestration platform started by the creators of the Cadence project, which Uber developed to ensure that jobs don’t fail and used for things like guaranteeing that payments are processed.
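To make the mission-critical point concrete, here is a minimal sketch of a durable payment workflow using Temporal’s Python SDK. The activity body, timeout and retry settings are illustrative assumptions, not details from the article; the takeaway is that the platform persists workflow state and retries failed steps so a payment is never silently dropped.

```python
# Minimal sketch of a durable payment workflow with Temporal's Python SDK
# (pip install temporalio). The charge_card activity and its retry/timeout
# values are illustrative assumptions, not taken from the article.
from datetime import timedelta

from temporalio import activity, workflow
from temporalio.common import RetryPolicy


@activity.defn
async def charge_card(order_id: str) -> str:
    # Placeholder for a real payment-gateway call; Temporal retries this
    # automatically if it raises or the worker crashes mid-flight.
    return f"charged order {order_id}"


@workflow.defn
class PaymentWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> str:
        # Workflow state is persisted, so the charge survives process
        # restarts -- the "jobs don't fail" guarantee described above.
        return await workflow.execute_activity(
            charge_card,
            order_id,
            start_to_close_timeout=timedelta(seconds=30),
            retry_policy=RetryPolicy(maximum_attempts=5),
        )
```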
Avoid using a fail-fast strategy on employees: Organizations sometimes hire buffer candidates and, if they don’t meet expectations, ask them to leave. Instead, candidates must be assessed thoroughly and given enough time to perform.
Short notice period: A notice period of 4-6 weeks is enough time for knowledge transition. Beyond that, many employees are unproductive during the notice period, spending it on completing HR formalities.
Right references: Candidates who work with a company for a longer duration build good references. There is no point in references from candidates who have spent less than one year at a company.
Faster onboarding: Most of the time the onboarding process is very long, with many steps involved: it starts with HR onboarding, followed by the practice/BU and then the project team. It is good to show a collaborative approach while onboarding candidates, but it is also important to have quick discussions with the relevant teams.
First, it makes the personal data of resident data principals vulnerable to foreign surveillance, because governments in whose jurisdictions such servers are located will arguably have better access to the data. Second, storage and transfer of the personal data of resident data principals to jurisdictions with lax data protection laws also makes their data vulnerable. Third, it reduces the domestic government’s access to this data, thereby interfering with the discharge of its regulatory and law enforcement functions, including counter-terrorism and the prevention of cyber attacks and cyber offences. This is because requests for such information are either denied, citing the law of the foreign country, or their fulfilment is often delayed by the inefficacious and time-consuming MLAT (Mutual Legal Assistance Treaty) processes. Fourth, it leads to missed opportunities for the domestic industry that would otherwise be engaged in providing storage services, in terms of foreign direct investment, creation of digital infrastructure and development of skilled personnel.
DARL is an attempt to drag expert systems into the 21st century. DARL was initially created as a solution to a problem that still exists today in Machine Learning: how do you audit a trained neural network? That is, if you use Machine Learning to create a model that you use in a real-world application, how do you ensure it doesn't accidentally do something bad, like identify the wrong person as a potential terrorist, or deny a loan to a minority group? Neural networks and other similar techniques produce models that are "black boxes". The answer the designer of DARL found was to use Fuzzy Logic rules as the model representation mechanism. Algorithms exist to perform Supervised, Unsupervised and Reinforcement learning on these rules. DARL grew out of that. Initially, the models were coded in XML, but later a fully fledged language was created so that all the usual tools like editors, interpreters, etc. could be used with the models. The rules are very easy to understand, so auditing them for unexpected effects is simple.
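For intuition, here is a minimal generic sketch in Python of why a fuzzy rule base is auditable: a rule like "IF income IS low THEN risk IS high" stays readable in the model itself. The membership ranges and weights are invented for illustration; this is not DARL syntax.

```python
# Generic fuzzy-rule illustration (not DARL syntax). The linguistic terms
# and all numbers below are invented for the example.
def triangular(x: float, left: float, peak: float, right: float) -> float:
    """Triangular membership function: degree of membership in [0, 1]."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)


def loan_risk(income: float) -> float:
    # Fuzzify: how "low" and how "high" is this income?
    low = triangular(income, 0, 20_000, 45_000)
    high = triangular(income, 35_000, 80_000, 200_000)
    # Two readable rules: IF income IS low THEN risk IS 0.9;
    #                     IF income IS high THEN risk IS 0.2.
    # Defuzzify with a weighted average of the rule outputs.
    total = low + high
    return (low * 0.9 + high * 0.2) / total if total else 0.5


print(loan_risk(25_000))  # mostly "low" income -> risk near 0.9
print(loan_risk(70_000))  # mostly "high" income -> risk near 0.2
```

Because each rule names its condition and consequence explicitly, an auditor can read the rule base directly instead of probing a black box.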
Jenkins was a CI tool at heart and later morphed into a CI/CD tool. Many people think that this fork in the road may have hurt the continued evolution of continuous delivery in the long term. But that is an argument for another DevOps.com article (or maybe even a panel discussion at an upcoming DevOps live event). Regardless of where you stand on that issue, as an open source project, it is hard to argue with the success of Jenkins. Driving a lot of that success is the Jenkins plugin architecture. There are literally thousands of plugins that allow Jenkins to work with just about anything. That is the engine that powered Jenkins, yes; but its secret superpower was and is open source. That said, Jenkins has grown a bit long in the tooth over the years. It’s not that it doesn’t do what it always did; it’s that what we do and how we do it have changed. Microservices, Kubernetes and even the cloud have changed the very fabric of the tapestry in front of which Jenkins sits. The open source community that supports Jenkins deserves enormous credit here: it has tried mightily to keep up with the many changes.
Shift-left approaches begin with the developer writing the first line of code, so vulnerabilities can be caught as early as possible, although the results at that stage tend to be vague and general. Shift right, on the other hand, detects vulnerabilities closer to the full deployment of the software, sometimes only in production runtime. Shifting toward the right is usually the easier approach, as it provides results that are more accurate and actionable, enabling developers to run the code and then find the mistakes. But it isn’t always the desirable choice, as many times the detection is simply too late. That means the fixes are harder and costlier, and in worst-case scenarios, your organization could already have been exposed to a given vulnerability. On the other hand, shift left enables developers to see security testing results as early as possible, saving both time and money for IT teams in the long run. The key to conquering this tension is fostering a painless testing methodology that can be envisioned as “one platform to rule them all.”
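As a toy illustration of a shift-left check, here is a sketch of a pre-commit-style scanner in Python; the deny-list and exit-code convention are assumptions standing in for a real static analysis tool.

```python
# Toy shift-left check: scan Python source for risky calls before the code
# is ever committed, let alone deployed. The deny-list is an assumption
# standing in for a real SAST tool's rule set.
import ast
import sys

RISKY_CALLS = {"eval", "exec"}

def scan(path: str) -> list[str]:
    """Return warnings for risky function calls found in a Python file."""
    with open(path, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append(f"{path}:{node.lineno}: risky call to {node.func.id}()")
    return findings

if __name__ == "__main__":
    problems = [finding for arg in sys.argv[1:] for finding in scan(arg)]
    print("\n".join(problems))
    # A non-zero exit blocks the commit: the "as early as possible"
    # feedback loop that shift left is after.
    sys.exit(1 if problems else 0)
```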
As with many things in technology, new disruptive ways of thinking are required to address the problem. There is a need to establish platforms, funding, policies and processes that diversify the talent pool in cybersecurity, opening it up to as wide a range of backgrounds as possible. Intelligence and law enforcement agencies are leading the way, keen to reclaim the edge from attackers. What started in 2014 with the FBI grappling with whether to hire hackers who smoke cannabis has turned into more formalized programs that welcome diversity with open arms. Organizations such as GCHQ, the U.K.’s signals intelligence agency, are actively hiring neurodiverse individuals for their unique ability to spot patterns in data. As with anything in cyber, what starts in intelligence agencies has a knack for achieving mainstream adoption among those defending large corporations. Those in cybersecurity need to recognize that diversity is about more than just equality. It is about optimizing defensive capabilities by having access to the widest possible range of problem-solving abilities.
The first order of business for CEOs is connecting the organization’s mission to the security of data, assets, and people. CEOs can do this by articulating an unambiguous foundational principle that establishes security and privacy as operational goals and business imperatives. Aflac, the largest provider of supplemental insurance at the workplace in the United States, has positioned cybersecurity at the center of who they are and what they do as a company. “We are one of the few insurance companies that measures ourselves on how fast we pay,” Aflac CISO Tim Callahan says. “Our operational managers are held to a standard of paying our claims fast. Dan Amos, our chairman and CEO, has never lost sight of who our customers are, and how much trust they have in us, and how we’re there for them during their time of need. That extends to protecting their information. He understands what the lack of cyber protection can do to our brand, to our customers, to our reputation. If the CEO were not passionate about that, then there’s a bigger problem.”
In the past, when organizations relied on their own private, often on-premises, data centers — and workers usually came to a physical office to do their jobs — security experts considered data and workloads to have a definable “perimeter” that needed to be defended. Bad actors, human or machine, were denied access to the network the way invaders were repelled from a castle: by building a (virtual) moat around it. Hence the use of authentication and authorization via individual logins and passwords. The architects who designed these systems assumed entities inside an organization could be trusted and that users’ identities were not compromised. But that castle-and-moat approach is widely considered unreliable today. Not only is there no single “castle” to defend — chances are, there’s already someone or something in your castle that shouldn’t be there. A Zero Trust approach assumes that, as the horror movie tagline goes, the call is coming from inside the house: someone or something that shouldn’t be there may already be on your network.
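A minimal sketch of the resulting “never trust, always verify” posture, in Python: every request is authenticated and authorized per resource, with no exemption for traffic that happens to originate inside the network. The verify_token helper and its claims format are hypothetical stand-ins for a real identity provider.

```python
# Zero Trust request handling, sketched: verify every request, regardless
# of where it comes from. verify_token() is a hypothetical stand-in for a
# real identity provider (e.g., validating a signed JWT).
from dataclasses import dataclass

@dataclass(frozen=True)
class Claims:
    subject: str
    scopes: frozenset

def verify_token(token: str) -> Claims | None:
    # Hypothetical: a real system would check signature, expiry and issuer.
    if token == "demo-token":
        return Claims(subject="alice", scopes=frozenset({"orders:read"}))
    return None

def handle_request(token: str, resource: str, required_scope: str) -> str:
    claims = verify_token(token)                 # authenticate every request
    if claims is None:
        raise PermissionError("unauthenticated")
    if required_scope not in claims.scopes:      # authorize per resource
        raise PermissionError(f"{claims.subject} lacks {required_scope}")
    # Note what is absent: no `if request_is_internal: allow` shortcut.
    # Network location confers no trust.
    return f"{claims.subject} granted {required_scope} on {resource}"

print(handle_request("demo-token", "orders/42", "orders:read"))
```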
For cloud adoption to be successful, buy-in is required from both the workforce and leadership. This is key to aligning tech investment and deployment with clear business goals, but a deep understanding of the strategic implications of cloud migration among C-suite and board members can sometimes be absent. Business leaders often believe it is entirely the responsibility of the CTO, but the discussion must go both ways; there is a gap to be bridged between business and IT to ensure that both sides are on the same page. “It’s easy to forget that you need a case for change, and to overlook alignment of any staff member in charge of a team,” said Mould. “The leadership team also need to consider how they put the organisation across as an attractive place for talent to help them with the cloud migration. The alternative is to outsource a capability that won’t be invested in internally, but a big part of this adoption is thinking differently about the brain drain, and looking at creating an internal capability.”
Quote for the day:
"Leaders should influence others in such a way that it builds people up, encourages and edifies them so they can duplicate this attitude in others." -- Bob Goshen