Australian academics have created a mind-blowing proof of concept for the future of nanorobotics
DNA nanobots are nanometer-sized synthetic devices consisting of DNA and
proteins. They are self-sufficient because DNA is a self-assembling molecule.
Not only does our natural DNA carry the code in which our biology is written,
but it also “knows” when to execute that code. Previous research in DNA
nanotechnology has shown that self-assembling devices capable of transferring
DNA code, much like their natural counterparts, can be created. However, the new
technology coming out of Australia is unlike anything we’ve encountered before.
These nanobots are capable of transferring information other than DNA. In
theory, they could transport every imaginable protein combination across a
biological system. To put it another way, we should ultimately be able to
instruct swarms of these nanobots to hunt down germs, viruses, and cancer cells
inside our bodies. Each swarm member would carry a unique protein, and when they
come across a harmful cell, they would assemble their proteins into a
configuration meant to kill the threat. It’d be like having a swarm of
superpowered killer robots coursing through your veins, hunting for monsters to
eliminate.
Indian CISOs voice concerns on CERT-In’s new cybersecurity directives
Fal Ghancha, CISO at DSP Mutual Fund, says that the majority of cybersecurity
alerts, more than 70%, are false positives, so a six-hour reporting mandate
could lead to over-reporting. Because the timeline is very tight, people will
become more aggressive and paranoid; they will report incidents in a rush and
make wrong decisions, he says. Ghancha points out that the CERT-In directives
include multiple granular actions that many organisations today don’t follow
in full. “The entire ecosystem will have to be integrated with a 24/7
monitoring system and skilled resources to ensure all the reports are seen,
analysed, and reported as per the new guidelines,” Ghancha says. The extra work
for security operations centers could be significant, he says. “Let’s say today
an organisation is monitoring its crown jewels only, which may be 20% of the
total assets. Tomorrow, the organisation will need to monitor additional
assets, which will be 50% to 60% higher than the current number.”
The Edison Ratio: What business and IT leaders get wrong about innovation
Good things come in threes. So, unfortunately, do not-so-good things, and that
includes leaders who invert the Edison Ratio. This third group of Edison Ratio
inverters is, if anything, the most dangerous — not because they’re malicious
but because they’re having fun. These are the “idea cluster bombers.” An idea
cluster bomber has brilliant ideas on a regular basis. Any one of their ideas is
so brilliant they’re bursting with it. And so they tell someone to drop
everything and go make it happen. Which is fine until the sun sets and rises
again. That’s when they have another brilliant idea, and tell someone else to
drop everything to make it happen. Brilliant! But not so brilliant that it can
withstand the impact of Edison Ratio Inversion. An example: Imagine someone has
a brilliant idea as they’re brewing coffee in preparation for starting off their
workday. They spend, oh, I dunno … let’s say they spend the morning fleshing it
out before Zooming a likely victim to work on it.
Minimum Viable Architecture in Practice: Creating a Home Insurance Chatbot
Our first step in creating an MVA is to make a basic set of choices about how
the chatbot will work, sufficient to implement the Minimum Viable Product
(MVP). In our example, the MVP has just the minimum features necessary to test
the hypothesis that the chatbot can achieve the product goals we have set for
it. If no one wants to use it, or if it will not meet their needs, we don’t
want to continue building it. Therefore, we intend to deploy the MVP to a
limited user base, with a simple menu-based interface, and we assume that the
latency delays that may be created by accessing external data sources to
gather data are acceptable to customers. As a result, we want to avoid
incorporating more requirements—both functional requirements and quality
attribute requirements (QARs)—than we need to validate our assumptions about
the problem we are trying to solve. The result is a deliberately minimal
initial design. If our MVP proves valuable, we will add capabilities to it and
incrementally build its architecture in later steps. An MVP is a useful
component of product development strategies, and unlike mere prototypes, an
MVP is not intended to be “thrown away.”
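As a rough illustration of scope, here is a minimal sketch of the kind of menu-based interaction loop such an MVP implies, written in Python. Everything in it is hypothetical: the menu options, the `fetch_claim_status` helper, and its stubbed data source are invented for this sketch, not taken from the article.

```python
# Minimal menu-based chatbot loop for a home-insurance MVP (sketch).
# The real MVP would call external data sources here; the MVA assumes
# the latency of those calls is acceptable to customers.

MENU = {
    "1": "Get a quote",
    "2": "Check claim status",
    "3": "Talk to an agent",
}

def fetch_claim_status(claim_id: str) -> str:
    # Stand-in for a (possibly slow) lookup in an external data source.
    return f"Claim {claim_id} is under review."

def chatbot() -> None:
    print("Welcome to the home insurance assistant.")
    while True:
        for key, label in MENU.items():
            print(f"  {key}. {label}")
        choice = input("Choose an option (q to quit): ").strip()
        if choice == "q":
            break
        if choice == "2":
            print(fetch_claim_status(input("Claim number: ").strip()))
        elif choice in MENU:
            print(f"'{MENU[choice]}' is not in this MVP yet.")
        else:
            print("Please pick a listed option.")

if __name__ == "__main__":
    chatbot()
```

The point is the scope, not the code: a single interaction loop and one external lookup are enough to test whether customers will use the chatbot at all.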
‘Decentralization Proves To Be an Illusion,’ BIS Says
It’s not surprising that a champion of central banks would dismiss the concept
of decentralization. But Chase Devens, research analyst at Messari, argues
centralization is largely responsible for the current mess, noting that it was
poor risk management combined with a lack of understanding of asset and protocol
functions — such as Terra and stETH — that left large centralized players such
as Celsius searching for liquidity. ... If DeFi lending wants to make it into
the real-world economy, BIS economists suggested it must engage in “large-scale
tokenisation of real-world assets” and rely less on crypto collateral. However,
“developing its ability to gather information about borrowers” would eventually
lead the system to “gravitate towards greater centralization.” “The similarities
between DeFi and legacy intermediaries are increasing, which has two important
implications,” the report read. “The first is that elements of DeFi, mainly
smart contracts and composability, could find their way into traditional
finance. The second implication is that, once more, decentralization proves to
be an illusion.”
Businesses brace for quantum computing disruption by end of decade
While the EY report warns about companies potentially losing out to rivals on
the benefits of quantum computing, there are also dangers that organizations
should be preparing for now, as Intel warned during its Intel Vision
conference last month. One of these is that quantum computers could be used to
break current cryptographic algorithms, meaning that the confidentiality of both
personal and enterprise data could be at risk. This is not a far-off threat, but
something that organizations need to consider right now, according to Sridhar
Iyengar, VP of Intel Labs and Director of Security and Privacy Research.
"Adversaries could be harvesting encrypted data right now, so that they can
decrypt it later when quantum computers are available. This could be sensitive
data, such as your social security number or health records, which are required
to be protected for a long period of time," Iyengar told us. Organizations may
want to address threats like this by taking steps such as evaluating
post-quantum cryptography algorithms and increasing the key sizes for current
crypto algorithms like AES.
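As a concrete example of the key-size step, the sketch below uses Python’s `cryptography` package to encrypt with AES-256-GCM. It illustrates only the symmetric key-size hardening Iyengar mentions; evaluating post-quantum algorithms for public-key cryptography is a separate effort.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A 256-bit AES key retains roughly 128-bit strength even against
# Grover's quantum search, which halves effective symmetric strength.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # 96-bit GCM nonce; never reuse with the same key
ciphertext = aesgcm.encrypt(nonce, b"sensitive health record", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"sensitive health record"
```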
Artificial intelligence has reached a threshold. And physics can help it break new ground
“Neural networks try to find patterns in the data, but sometimes the patterns
they find don’t obey the laws of physics, making the model it creates
unreliable,” said Jordan Malof, assistant research professor of electrical and
computer engineering at Duke. “By forcing the neural network to obey the laws of
physics, we prevented it from finding relationships that may fit the data but
aren’t actually true.” They did that by constraining the neural network with a
physics model known as the Lorentz model, a set of equations that describes how
the intrinsic properties of a material resonate with an electromagnetic field. This,
however, was no easy feat to achieve. “When you make a neural network more
interpretable, which is in some sense what we’ve done here, it can be more
challenging to fine tune,” said Omar Khatib, a postdoctoral researcher working
in Padilla’s laboratory. “We definitely had a difficult time optimizing the
training to learn the patterns.”
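The article doesn’t reproduce the equations, but the classical Lorentz oscillator model it refers to is usually written as a sum of resonances contributing to a material’s complex permittivity. Here is a minimal sketch of that standard textbook form (the parameter names are mine, not the paper’s):

```python
import numpy as np

def lorentz_permittivity(omega, eps_inf, omega_p, omega_0, gamma):
    """eps(w) = eps_inf + sum_j wp_j**2 / (w0_j**2 - w**2 - 1j*g_j*w).

    omega: angular frequencies, shape (N,).
    omega_p, omega_0, gamma: per-oscillator plasma frequency, resonance
    frequency, and damping, each shape (M,).
    """
    w = omega[:, None]  # broadcast frequencies against oscillators
    terms = omega_p**2 / (omega_0**2 - w**2 - 1j * gamma * w)
    return eps_inf + terms.sum(axis=1)

# In a physics-constrained network, the model predicts eps_inf, omega_p,
# omega_0, and gamma, and this formula generates the spectrum, so every
# output obeys the Lorentz equations by construction.
```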
Test Data Management Concept, Process And Strategy
Generally, test data is constructed based on the test cases to be executed. In
a system testing team, for example, the end-to-end test scenarios must be
identified first, and the test data designed around them. A single scenario may
involve several applications working together. In a workload-management
product, say, the management controller, the middleware applications, and the
database applications must all function in coordination with one another, so
the required test data may be scattered across all of them. A thorough analysis
of all the different kinds of data that may be required has to be made to
ensure effective management. ... This step generally extends the previous one:
it clarifies what the end-user or production scenario will be and what data it
requires. Compare that data with what currently exists in the test environment;
based on the gap, new data may need to be created or existing data modified
(see the sketch below). ... Based on the testing requirements of the current
release cycle (which can span a long time), the test data may need to be
altered or created as described in the previous point.
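A minimal sketch of that gap analysis in Python, using hypothetical (application, data condition) pairs from the workload-management example above:

```python
# Compare the data a scenario requires against what the test
# environment already holds; all entries are illustrative.
required = {
    ("controller", "failover-node"),
    ("middleware", "stale-queue"),
    ("database", "replica-lag"),
}
existing = {
    ("middleware", "stale-queue"),
    ("database", "replica-lag"),
}

to_create = required - existing  # data to be created or modified
to_reuse = required & existing   # data that can be reused as-is

print("create:", sorted(to_create))
print("reuse: ", sorted(to_reuse))
```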
Sigma rules explained: When and how to use them to log events
Sigma rules are textual signatures written in YAML that make it possible to
detect anomalies in your environment by monitoring log events that can be
signs of suspicious activity and cyber threats. Developed by threat intel
analysts Florian Roth and Thomas Patzke, Sigma is a generic signature format
for use in SIEM systems. A prime advantage of using a standardized format like
Sigma is that the rules are cross-platform and work across different security
information and event management (SIEM) products. As such, defenders can use a
“common language” to share detection rules with each other independent of
their security arsenal. These Sigma rules can then be converted by SIEM
products into their distinct, SIEM-specific language, while retaining the
logic conveyed by the Sigma rule. Whereas analysts more commonly associate
YARA rules with identifying and classifying malware samples (files) using
indicators of compromise (IOCs), Sigma rules focus on detecting log events
that match the criteria outlined by the rule. Incident response professionals,
for example, can use a Sigma rule to specify detection criteria; any log
entries matching the rule will trigger an alarm.
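For illustration, here is a small rule in Sigma’s native YAML format; the scenario (failed Windows logons) and its field values are a representative example, not one taken from the article:

```yaml
title: Failed Logon Attempt
status: experimental
description: Detects failed Windows logon events, which in volume can
  indicate password guessing.
logsource:
  product: windows
  service: security
detection:
  selection:
    EventID: 4625  # Windows "An account failed to log on"
  condition: selection
falsepositives:
  - Users mistyping their passwords
level: medium
```

A converter such as sigma-cli can then translate this rule into the query language of a particular SIEM while preserving its detection logic.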
5 steps for writing architectural documentation in a code-focused culture
Don't let people lose faith in documentation by allowing it to become outdated
and inaccurate. I've found that the closer you keep the docs to the
implementation—including in the very same code repo, if applicable—the better
chance they will stay up to date. When docs reside with the code, both can be
updated in a single pull request instead of docs being an afterthought. Don't
be afraid to build docs from the code if it makes sense. In any case, review
the documentation periodically, prioritizing sections that document rapidly
changing components. Keep your experiment going and iterate on your
documentation as you would iterate on your architecture. Share your insights
with teams who want or need to bootstrap their docs. Can architects write docs
in your organization without feeling anxiety? Are they expected to? Do they
want to? Hopefully, you will begin to see movement on this spectrum. Finally,
remember that some of your teammates and leaders started out in that
code-focused culture.
Quote for the day:
"Leadership is a privilege to better
the lives of others. It is not an opportunity to satisfy personal greed." --
Mwai Kibaki