nano robot

Technology review – Femto computing

Nanotechnology is now well established, well applied, and well funded. In the near future, we may well come across femtotechnology.

As Wikipedia states, femtotechnology is a "hypothetical term used in reference to structuring of matter on the scale of a femtometer, which is 10^−15 m. This is a smaller scale in comparison to nanotechnology and picotechnology which refer to 10^−9 m and 10^−12 m respectively."

Tl; dr;

Femtotechnology is still firmly theoretical, with no realistic applications at present.

Imagine a future in which the soles of your shoes are made of femto-enhanced rubber that is more resilient to the elements. Femto-sized probes and chemicals may one day course through your blood to protect your immune system against deadly viruses and diseases, or make your smartphone thin and flexible enough to be integrated into your body.

These are some of the most notable innovations that femtotechnology could one day provide for humanity, thanks to the power of femto computing.

Femto computing: Measuring at a subatomic scale

illustration of lights

Source:  Ecstadelic

The Australian AI researcher Hugo de Garis wrote a few years ago in Humanity Plus Magazine about the power this future technology could bring: "If ever a femtotech comes into being, it will be a trillion trillion times more "performant" than nanotech, for the following obvious reason. In terms of component density, a femtoteched block of nucleons or quarks would be a million cubed times denser than a nanoteched block. Since the femtoteched components are a million times closer to each other than the nanoteched components, signals between them, traveling at the speed of light, would arrive a million times faster. The total performance per second of a unit volume of femtoteched matter would thus be a million times a million times a million = a trillion trillion = 10^24."

Through femtotechnology comes femto computing: computation at the scale of subatomic particles, three orders of magnitude smaller than picotechnology and six orders of magnitude smaller than nanotechnology.

Femto is derived from the Danish word femten, meaning "fifteen": a femtometer, or "fermi," is 10^−15 of a meter.
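To make the prefix arithmetic and de Garis's back-of-the-envelope estimate concrete, here is a small Python sketch (an illustrative check of my own, not code from the original article):

```python
# SI prefix exponents (metres): nano = 10^-9, pico = 10^-12, femto = 10^-15
NANO_EXP, PICO_EXP, FEMTO_EXP = -9, -12, -15

print(NANO_EXP - FEMTO_EXP)   # 6 -> femto is six orders of magnitude below nano
print(PICO_EXP - FEMTO_EXP)   # 3 -> femto is three orders of magnitude below pico

# de Garis's rough performance argument, restated:
linear_gain = 10**6                       # components ~a million times closer together
density_gain = linear_gain ** 3           # component density scales with the cube: 10^18
signal_gain = linear_gain                 # light-speed signals arrive a million times sooner
total_gain = density_gain * signal_gain   # 10^24, the "trillion trillion" in the quote

print(f"{total_gain:.0e}")                # 1e+24
```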

Ben Goertzel, one of the world’s leading AI researchers, highlighted in his companion article in Humanity Plus Magazine: “What a wonderful example we have here of the potential for an only slightly superhuman AI to blast way past humanity in science and engineering. The human race seems on the verge of understanding particle physics well enough to analyze possible routes to femtotech. If a slightly superhuman AI, with a talent for physics, were to make a few small breakthroughs in computational physics, then it might (for instance) figure out how to make stable femto structures at Earth gravity… resulting in femto computing – and the slightly superhuman AI would then have a computational infrastructure capable of supporting massively superhuman AI. Can you say “singularity”? Of course, femtotech may be totally unnecessary in order for a Vingean singularity to occur (in fact I strongly suspect so). But be that as it may, it’s interesting to think about just how much practical technological innovation might ensue from a relatively minor improvement in our understanding of fundamental physics.”

A computer science professor at the time, de Garis looked for ways to map computer science concepts onto QCD (quantum chromodynamics) phenomena. When computing at the femto level, he writes, the essential ingredients of (digital) computing are still bits and logic gates.

“A bit is a two-state system (e.g., voltage or no voltage, a closed or open switch, etc.) that can be switched from one state to another. It is usual to represent one of these states as “1” and the other as “0,” i.e., as binary digits. A logic gate is a device that can take bits as input and use their states (their 0 or 1 values) to calculate its output.

The three most famous gates, are the NOT gate, the OR gate, and the AND gate. The NOT gate switches a 1 to a 0, and a 0 to a 1. An OR gate outputs a 1 if one or more of its two inputs is a 1, else outputs a 0. An AND gate outputs a 1 only if the first AND second inputs are both 1, else outputs a 0.

There is a famous theorem in theoretical computer science, that says that the set of 3 logic gates {NOT, OR, AND} are “computationally universal,” i.e., using them, you can build any Boolean logic gate to detect any Boolean expression (e.g. (~X & Y) OR (W & Z)).”

So if he could find a one-to-one mapping between these three logic gates and phenomena in QCD, he could compute anything in QCD; he would have femtometer-scale computation.
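To see what "computationally universal" means in practice, here is a short Python sketch (my illustration, not de Garis's own code) that builds the example expression from the quote, (~X & Y) OR (W & Z), out of nothing but NOT, OR, and AND:

```python
# The three primitive gates, operating on bits (0 or 1)
def NOT(a):    return 1 - a
def OR(a, b):  return 1 if (a or b) else 0
def AND(a, b): return 1 if (a and b) else 0

# The Boolean expression from the quote, composed purely from the three gates above
def expr(x, y, w, z):
    return OR(AND(NOT(x), y), AND(w, z))

# Exhaustively evaluate the truth table
for x in (0, 1):
    for y in (0, 1):
        for w in (0, 1):
            for z in (0, 1):
                print(x, y, w, z, "->", expr(x, y, w, z))
```

Any other Boolean function can be assembled the same way, which is why finding QCD phenomena that behave like these three gates would, in principle, amount to femtometer-scale computation.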

You can read the essay in full here.

Summary

In general, practical applications of femtotechnology are currently considered unlikely; we are still a long way from seeing it used in real life.

But some sources already point to possibilities for femto science in biology through ultrashort-pulse laser technology, which lets biologists observe known biological reactions on the shortest timescales yet.

For more new and exciting ideas about it, check out Femtochemistry and Femtobiology: Ultrafast Events in Molecular Science by Monique M. Martin and James T. Hynes.

By Tuan Nguyen

quantum computer

Technology review – New Google Quantum Computer

Over the years, the wonders of quantum computers have stirred the curiosity and imagination of the gifted minds in the tech community.

Since then, countless prototypes of these machines have surfaced in the tech market, carrying hopes of a brighter, faster, more capable future.

Last week, a draft research paper revealed that Google has built a quantum computer that solved a problem which would take conventional supercomputers 10,000 years.

The leaked paper was erroneously uploaded on the website of NASA’s Ames Research Center in Moffett Field, California before it was pulled down.

According to the Financial Times, which first broke the story, some of the researchers at the center are authors of the paper. Readers downloaded the manuscript before it vanished, and it has been circulating online, including on Reddit.

John Martinis, the physicist who leads Google's quantum computing effort in Santa Barbara, California, remained tight-lipped about the leak, but others in the field believe the paper is legitimate.

You can read the paper in full here.

 

Tl; dr;

While classical computers such as laptops, smartphones, and even modern supercomputers are extraordinarily powerful, a leaked paper suggests their capabilities are about to be put into perspective.

Google, one of the giant tech companies investing in quantum computing, is said to have just achieved quantum supremacy: the milestone at which a quantum computer performs a calculation that classical computers cannot practically do.

The paper names John Martinis, the physicist who leads Google's quantum computing effort in Santa Barbara, California, among its authors.

Will Google's quantum supremacy milestone have significant implications?

illustration of a computer

Source: Popular Science

While IBM, Microsoft, Intel, and numerous others are working out how to advance quantum computing technology, a story shrouded in intrigue surfaced earlier this month.

The search giant Google is said to have achieved quantum supremacy, the news emerging through an internet leak.

As stated in the paper, physicists at Google used a quantum computer to perform a calculation that would overwhelm the world's best conventional supercomputer, Summit.

The paper details a quantum processor called Sycamore, containing 54 superconducting quantum bits, or qubits, with which the team claims to have achieved quantum supremacy.

“This dramatic speedup relative to all known classical algorithms provides an experimental realization of quantum supremacy on a computational task and heralds the advent of a much-anticipated computing paradigm,” the paper says.

The paper calculates the task would have taken Summit, the world’s best supercomputer, 10,000 years – but Sycamore did it in just 3 minutes and 20 seconds.

“It’s a great scientific achievement,” says physicist Chad Rigetti, the founder and CEO of Rigetti Computing in Berkeley and Fremont, California, which is also developing its own quantum computers.

Meanwhile, Greg Kuperberg, a mathematician at the University of California, Davis, calls the advance "a big step toward kicking away any plausible argument that making a quantum computer is impossible."

Google's quantum computer tackled a task called a random circuit sampling problem. In such a problem, after a series of operations each qubit outputs a 1 or a 0, and the aim is to work out the probability of each possible outcome occurring.
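As a rough classical illustration of what "sampling" means here (a toy stand-in with a made-up bias, not Google's actual circuit), the sketch below draws random bitstrings and estimates how often each outcome occurs:

```python
import random
from collections import Counter

N_QUBITS = 3        # toy register; Sycamore used 53 working qubits
N_SAMPLES = 100_000

def sample_bitstring():
    # Stand-in for running one random circuit: each "qubit" yields 0 or 1,
    # here with a simple fixed bias instead of quantum interference.
    return "".join("1" if random.random() < 0.6 else "0" for _ in range(N_QUBITS))

counts = Counter(sample_bitstring() for _ in range(N_SAMPLES))

# Estimated probability of each of the 2^N possible outcomes
for outcome, count in sorted(counts.items()):
    print(outcome, count / N_SAMPLES)
```

In the real experiment the output distribution is shaped by quantum interference across 2^53 possible bitstrings, which is exactly what makes estimating it so expensive for a classical machine.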

Based on the tech giant's findings, Sycamore was able to find the answer in just a few minutes, a task estimated to take 10,000 years on the most powerful supercomputer, Summit.

Sycamore consists of only 54 qubits, one of which didn't work. For quantum computers to really come into their own, they will probably need thousands.

Though that is impressive, there is no practical use for it.

“We shouldn’t get too carried away with this,” says Ciarán Gilligan-Lee at University College London. This is an important step in the era of quantum computing, but there’s still a long way to go, he says.

Summary

This feat isn't particularly useful beyond producing truly random numbers; it was a proof of concept, and the processor is not free of errors.

We can still expect bigger and better processors to be built and used for more useful calculations.

In a statement, Jim Clarke at Intel Labs said:

“Google’s recent update on the achievement of quantum supremacy is a notable mile marker as we continue to advance the potential of quantum computing,”

As the leak suggests, Google appears to have partnered with NASA to help test its quantum computer. The two organizations made an agreement to do this in 2018, so the news isn't entirely unexpected.

By Tuan Nguyen

machine-brain merge

Technology review – Neuralink

Imagine a machine interfaced directly with the human brain, making anyone SUPERHUMAN.

Now, that may all become possible.

The company responsible is Neuralink, which has been making headlines lately for its controversial brain-chip interface and finally revealed its top-secret project via a YouTube live stream on July 17, 2019.

The neuro-technology company aims to develop an implant that will enable humans to control compatible technology with their thoughts. In the short term, it would be used to treat serious brain diseases and brain damage caused by stroke.

In the long term, Neuralink also aims at human enhancement, linking artificial intelligence (AI) with the power of the human mind.

Neuralink, or Neuralink Corp., is a tech startup founded in July 2016 by the futurist entrepreneur Elon Musk (also CEO and founder of Tesla and SpaceX) to create "ultra-high bandwidth brain-machine interfaces to connect humans and computers." In 2017, the company said its initial goal was to devise brain interfaces to alleviate the symptoms of chronic medical conditions.

Tl; dr;

Neuralink is a neuro-technology company that aims to build implants connecting human brains with computer interfaces via artificial intelligence. Its initial use would let people with paralysis perform simple tasks on phones or computers without any physical movement.

To be clear, the startup has not yet performed any human trials (it hopes to by the end of 2020), but it has already started testing its prototypes on rodents, and even "a monkey has been able to control the computer with his brain," according to founder Musk.

The company has been super secretive about the nature of its work since its founding in 2016.

Merging humans with AI by Neuralink implant

An artist's visualization of how a Neuralink brain/computer interface might look.

Source: NYT

Philosophers, sci-fi enthusiasts, and daydreamers have long dreamed of turbocharging their brainpower or reading someone else's mind.

But… that’s way too far to be realized for now.

Neuralink, the company led by serial entrepreneur Elon Musk, has just begun lifting the curtain on its ambitious, sci-fi-inspired implantable brain-computer interfaces (BCIs). The device prototype's name derives from "neural lace," a science-fiction concept from the fictional universe of Scottish author Iain M. Banks's Culture novels.

As reported by The Verge, the neuro-technology company's first public project aims to implant a device in people who have been paralyzed, allowing them to control their phones and computers in a way that resembles telepathy. It involves flexible "threads" (4 to 6 μm wide), thinner than a human hair, which are less likely to cause neurological damage than the brain-machine implants currently on the market.

These very small threads are injected deep within a person's brain to detect and record the activity of neurons. The information gathered by the tiny threads is then passed to an external unit that can transmit it to a computer, where the data can be used, forming a brain-machine interface.

The system can include “as many as 3,072 electrodes per array distributed across 96 threads,” according to a white paper credited to “Elon Musk & Neuralink.” The company is also developing a neurosurgical robot capable of inserting six threads into the human brain per minute.
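A quick arithmetic check keeps those figures straight (my own back-of-the-envelope, derived from the numbers above):

```python
electrodes_per_array = 3072      # from the Neuralink white paper
threads_per_array = 96

electrodes_per_thread = electrodes_per_array // threads_per_array   # 32
electrodes_per_insertion_pass = 6 * electrodes_per_thread           # 192, one six-thread pass

print(electrodes_per_thread, electrodes_per_insertion_pass)
```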

neuralink implants

Source: slashgear.com

Procedure

Each thread, thinner than one-tenth the width of a human hair, carries 32 electrodes; the insertion robot handles them in groups of six threads, or 192 electrodes, at a time.

Each electrode group is encased inside a small implantable device that contains custom wireless chips, measuring four by four millimetres.

These threads are precisely inserted into the brain by a tiny needle at the end of the robot (around 24 microns in diameter), targeting only specific areas of the brain and avoiding damage to blood vessels.

Summary

In the future, Neuralink aims to make the operation as safe and painless as laser eye surgery.

As Neuralink president Max Hodak told The New York Times, currently the operation would require drilling holes into the patient’s skull to implant the threads.

But in the future, they hope to use a laser beam to pierce the skull with a series of tiny holes, which would not be felt by the patient.

By Tuan Nguyen

Technology review – WiFi 6

Wi-Fi 6, also known as 802.11ax or AX Wi-Fi, is the newest version of the 802.11 standard for wireless network transmission that we commonly call Wi-Fi: the next-generation wireless technology, faster than 802.11ac (now known as Wi-Fi 5).

It was developed by the Institute of Electrical and Electronics Engineers (IEEE), the world's largest association of technical professionals. The IEEE is essentially the keeper of Wi-Fi, with committees responsible for developing the technology and establishing industry standards.

Wi-Fi 6 was built in response to the growing number of internet-connected devices, providing faster wireless speeds, increased efficiency, and reduced congestion in heavy-bandwidth usage scenarios.

Wi-Fi 6 isn't a new means of internet connectivity, but rather an upgraded standard that compatible devices, particularly routers, can take advantage of.

Meanwhile, some tech vendors are already introducing Wi-Fi 6 compliant devices to the market: Samsung's Galaxy S10 was the first phone to support Wi-Fi 6, followed by Apple's new iPhone 11 series. Expect to see labels with a Wi-Fi signal indicator and a version number, in this case the number 6, on the packaging of upcoming devices soon.

Note: The Wi-Fi Alliance has switched to version numbers, away from the confusing and less memorable IEEE 802.11 names. It won't start offering Wi-Fi 6 certification for devices until the third quarter of 2019.

Wifi interface visual

Source: DIGNITED

Tl; dr;

Wi-Fi 6 is the next-generation standard in Wi-Fi technology. It was previously known as 802.11ax until the Wi-Fi Alliance decided to rename it more succinctly. The new standard offers data transfer speeds roughly 40% higher than Wi-Fi 5, achieved through more efficient data encoding and resulting in higher throughput.

Wi-Fi 6: The 6th Generation of Wi-Fi

 

wifi 6 specification

The Wi-Fi 6 router will connect more devices simultaneously

Source: TP-Link Technologies Co., Ltd.

Compared with today's Wi-Fi 5 wireless standard, Wi-Fi 6 is a major upgrade: not just a change of routers, but a change in devices and Wi-Fi connectivity in general.

Let’s see how this wonderful Wi-Fi standard upgrade could change our daily lives.

Faster Wi-Fi. Four to ten times faster than Wi-Fi 5, Wi-Fi 6 operates in dual-band mode, meaning it handles a lot more bandwidth and therefore supports faster data transfer speeds than its predecessor.

OFDMA. For increased efficiency, Wi-Fi 6 supports OFDMA, or Orthogonal Frequency Division Multiple Access, a way of dividing a wireless channel into a large number of sub-channels. Through these smaller channels, multiple devices can talk to the access point or router at the same time without waiting for one device to finish first.

With OFDMA, each channel is chopped up into hundreds of smaller sub-channels, each with a different frequency. The signals are then turned orthogonally (at right angles) so they can be stacked on top of each other and de-multiplexed. With the bank analogy, imagine a teller being able to handle multiple customers when they are free. So customer one hands the teller a check and while that person is signing the check, the teller deals with the next customer, etc. The use of OFDMA means up to 30 clients can share each channel instead of having to take turns broadcasting and listening on each. ~ Network world
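A toy scheduler makes the idea concrete (a simplified model of my own, not how any real access-point firmware works): split one channel into resource units and hand them out to whichever clients have data queued, all within the same transmission window.

```python
# Toy OFDMA-style allocation: one 20 MHz channel split into resource units (RUs)
# that several clients can use within the same transmission window.
RESOURCE_UNITS = 9          # a 20 MHz channel can be split into up to nine 26-tone RUs
clients_with_data = ["phone", "laptop", "thermostat", "tv"]

def allocate(clients, n_rus):
    """Round-robin the available RUs over clients that have traffic queued."""
    allocation = {client: [] for client in clients}
    for ru in range(n_rus):
        client = clients[ru % len(clients)]
        allocation[client].append(ru)
    return allocation

for client, rus in allocate(clients_with_data, RESOURCE_UNITS).items():
    print(f"{client}: resource units {rus}")
```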

MU-MIMO. Wi-Fi 6 also supports an improved version of MU-MIMO, so a Wi-Fi 6 access point can transmit to and receive from multiple Wi-Fi 6 devices concurrently. This allows a larger amount of data to be transferred at once and enables access points (APs) to handle larger numbers of devices simultaneously.

Backward compatible. Wi-Fi 6 is carefully designed to be maximally forward and backward compatible with 802.11a/g/n/ac. This means that if you use a Wi-Fi 6 router now, it can still serve 802.11a/g/n/ac devices, though of course those devices won't get the full benefits of Wi-Fi 6.

Target Wake Time (TWT). Wi-Fi 6 access points will be much smarter about scheduling when devices wake up and request information. This helps devices avoid interfering with each other, which in turn allows them to spend more time in their battery-saving sleep modes.

Longer Battery Life. With TWT, your smartphones, laptops, and other Wi-Fi-enabled devices should have longer battery life, as well.

When the access point is talking to your smartphone, it can tell the device exactly when to put its Wi-Fi radio to sleep and exactly when to wake it up to receive the next transmission. This will be a great benefit for low-power “Internet of Things” devices that connect via Wi-Fi.

Summary

Whether you are in a crowded stadium, airport, hotel, office, or apartment, or in the comfort of your home with everyone connected to Wi-Fi, the new Wi-Fi 6 technology will, according to Intel, improve each user's average speed by "at least 4 times" in congested areas with many connected devices.

So the chances of getting slow Wi-Fi should drop considerably.

By Tuan Nguyen

The Great Wall of China

Technology review – What do we know about the Great Firewall?

When we say "firewall" here, we are not talking about the software found on home computers; the Great Firewall is China's great barrier of strict censorship over the internet, with effects that reach offline as well.

Known as the world's most sophisticated censorship and surveillance system, the Great Firewall of China stops its people from connecting to certain websites and services: it blocks domain names, misdirects traffic, and can even shut down encrypted communications by figuring out which service a user is trying to connect to.

Imagine accessing an internet with no Google, Facebook, Twitter, YouTube, the New York Times or any foreign website.

Local sources, whistleblowers, and even simple emails are becoming increasingly inaccessible to foreign journalists covering news from inside China. Journalists report that their sources are routinely intimidated, leaving them empty-handed because only a few people are willing to speak to them.

Yaqiu Wang, Northeast Asia correspondent with the Committee to Protect Journalists said, “A quarter of foreign journalists surveyed reported that their sources were harassed, detained, questioned or punished at least once for speaking to them.”

In one reported case, Tibetan entrepreneur and education advocate Tashi Wangchuk, who spoke to a New York Times journalist, was jailed on a charge of inciting separatism.

Large-scale internet shutdowns by the authorities are also becoming more localized, with content and individual phones traced and monitored.

As Hong Kong-based journalist James Griffiths describes in his book, The Great Firewall of China, “The key danger of the Great Firewall is that, by its very existence, it acts as a daily proof of concept for authoritarians and dictators the world over: proof that the internet can be regulated and brought to heel.”

Tl; dr;

Most countries impose some cyber censorship, such as banning websites that promote hate. China, for its part, takes monitoring and censorship to the extreme: a blocked account and a visit from the police. This has caught the attention of the international media and provoked an uproar.

With hundreds of millions of its 1.4 billion people online, the country argues that the restrictions enforced by the Great Firewall are mostly about maintaining social order and safeguarding national security.

The Communist Party has largely succeeded in creating its version of the internet – digital propaganda, at its finest.

The Great Firewall: China's War Over the Internet

Source: InternetFreedom.org

A colloquial term for mainland China's internet censorship system, the "Great Firewall" is part of the Golden Shield Project (also known as the National Public Security Work Informational Project). This censorship system uses both legislative action and enforcement technology to regulate the country's internet.

The cyber wall blocks foreign websites, apps, social media, VPNs, emails, instant messages, and other online resources that the authorities deem inappropriate or offensive. Targets range from pornography and violence to, above all, politically sensitive material (content promoting democracy or anything that could put the Communist Party in a bad light).

In one survey, China outranked Iran as the world's worst country for online freedom.

world worst online freedom

Source: bloomberg.com

Greg Walton, a D.Phil candidate at Centre for Doctoral Training in Cyber Security at the University of Oxford, who has monitored Chinese tactics for some time said, “Clearly, China can put in tens of thousands of times more resources into [monitoring and surveillance] than civil society actors can for internet freedom, even those supported by the U.S. government.”

A 2016 Harvard study estimated that the Chinese government fabricates and posts approximately 448 million social media comments a year; censorship also relies on manual deletion of posts, with an estimated 100,000 people employed by the government and private companies to do just this.

China is one of several countries that have completely banned Wikipedia.

As reported by the Open Observatory of Network Interference on May 4 this year, the country is using DNS injection to prevent its citizens from accessing the online encyclopedia.

"In late April, the Wikimedia Foundation determined that Wikipedia was no longer accessible in China," Samantha Lien, communications manager for the Wikimedia Foundation, said in an email Wednesday. "After closely analyzing our internal traffic reports, we can confirm that Wikipedia is currently blocked across all language versions."

How does the Great Firewall work?

great firewall network diagram

Source: ThousandEyes

To ensure effective censorship and monitoring, the Chinese government has full control over every Internet Service Provider in the country, setting up different blocking methods that will stop users from accessing forbidden content.

When a user enters a URL, the firewall can tamper with the DNS response (a technique known as DNS poisoning) so that the browser receives a bogus address and the page never loads properly. If the user tries to connect directly via the real IP address, the Great Firewall detects and blocks that connection too. It also scans URLs for sensitive keywords and resets the connection immediately if it finds any.
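Conceptually, that layered filtering can be sketched in a few lines of Python (a deliberately simplified model for illustration only; the domains, IPs, and keywords below are placeholders, and the real system is far more elaborate):

```python
BLOCKED_DOMAINS = {"facebook.com", "twitter.com"}   # illustrative entries only
POISONED_IPS = {"203.0.113.9"}                      # example of a bogus DNS answer
SENSITIVE_KEYWORDS = {"democracy", "protest"}       # illustrative keywords only

def firewall_verdict(domain: str, resolved_ip: str, url_path: str) -> str:
    """Toy model of the three blocking layers described above."""
    if domain in BLOCKED_DOMAINS:
        return "DNS poisoned: return a bogus IP so the page never loads"
    if resolved_ip in POISONED_IPS:
        return "direct-IP connection reset"
    if any(keyword in url_path.lower() for keyword in SENSITIVE_KEYWORDS):
        return "URL keyword filter: connection reset"
    return "allowed"

print(firewall_verdict("example.com", "93.184.216.34", "/news/democracy-rally"))
```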

The Hidden “Cracks”

Have you noticed that there are still Chinese users on Facebook and Twitter, even though the Great Firewall blocks those sites?

They are using VPN services as "hidden tunnels" that allow users in China to access sites that are otherwise blocked by their government.

Doing so is prohibited by law in China, and the Great Firewall actively blocks the IP addresses of popular VPN services too.

Summary

China's success in imposing restrictions on its citizens and on global online companies has encouraged like-minded countries such as Russia and India to clamp down on their once-vibrant internets and increase online regulation.

By Tuan Nguyen

iPhone 11 with different colours

Technology review – The improvements of iPhone 11?

Over the past few months, lots of rumors have been flying everywhere about Apple’s 2019 iPhones, iPhone 11.

Now, everyone’s anticipation has come to an end.

On Tuesday, the tech giant finally revealed its iPhone 11 series at its annual September keynote event in Cupertino, California.

The three new smartphones announced were the 11 Pro (5.8-inch model), the 11 Pro Max (6.5-inch model), and simply the iPhone 11 (6.1-inch model).

A few of the most notable features are the display, the camera, the battery, a little extra water resistance, and better charging.

Among the three, the iPhone 11 has been billed as "just the right amount of everything," featuring a new dual-camera system, the A13 Bionic chip, and "the highest-quality video in a smartphone."

Tl; dr;

Apple's iPhone 11 is its new $699 flagship smartphone, offering a range of powerful features at an affordable price. The new iPhone has a glass body that comes in six different colors.

Though it doesn’t look much different than the iPhone XR, it’s made from the toughest glass ever in a smartphone and offers improved water resistance (IP68 rating) to boost overall durability.

iPhone 11: The next generation of iPhone

iPhone 11 colours

Source: MacRumors

Design

iPhone 11 design

Source: MacRumors

In terms of design, the tech company didn’t deviate too much from the formula it presented in the iPhone XS and XR.

The iPhone 11's body size is sandwiched between the two new Pros:

  • iPhone 11: 75.7 x 150.9 x 8.3mm
  • iPhone 11 Pro: 71.4 x 144 x 8.1mm
  • iPhone 11 Pro Max: 77.8 x 158 x 8.1mm

All three phones share the same aesthetics, having a notched display and an all-glass back with a new square camera array. However, the 11 Pros feature stainless steel sides while the new iPhone has aluminium sides.

Water Resistance

iPhone 11 water resistance

Source: MacRumors

The iPhone 11 carries an IP68 water-resistance rating, meaning you can dunk it in 2 meters (6.5 feet) of water for up to 30 minutes with no ill effects.

Colour

The iPhone 11 comes in six different colours: white, black, yellow, red, purple, and green. The latter two are new for 2019.

Display

iPhone 11 display

Source: MacRumors

The new iPhone features a 6.1-inch LCD that Apple calls the "Liquid Retina HD display." It offers a 1792 x 828 resolution at 326 ppi, a 1400:1 contrast ratio, 625 nits maximum brightness, True Tone support (which adjusts white balance to ambient lighting), and wide colour support for true-to-life colours.

Front camera

iPhone 11 front camera

Source: MacRumors

The new iPhone and the iPhone 11 Pro models share the same 12MP, f/2.2 front camera. They all also gain a brand-new feature, "slofies," which records slow-motion selfie videos at 120fps.

Rear camera

iPhone 11 rear camera

Source: MacRumors

Though the two iPhone 11 Pro models have new triple-camera systems, the iPhone 11's dual cameras are no different in quality from the corresponding lenses on the Pros:

  • Camera 1: 12MP wide, f/1.8, OIS
  • Camera 2: 12MP ultra wide, f/2.4, 120-degree FOV

Performance

The new iPhone has the 7-nanometer A13 Bionic chip, the fastest chip ever in a smartphone according to Apple, with a CPU and GPU 20% faster than the A12's. It also comes with an advanced Neural Engine (for real-time photo and video analysis), machine learning accelerators, and Core ML 3 support: a chip powerful enough for a laptop, let alone a phone.

Sound

The iPhone 11 features spatial audio (for a more immersive sound experience), a new sound visualizer (3D sound model) and now comes with Dolby Atmos support (crisper, clearer, and more immersive on all of the new iPhones).

Biometrics

Since the launch of the iPhone X, all of Apple's latest iPhone models have had Face ID for 3D facial recognition. On the iPhone 11 it works the same way, but Apple says it's 30% faster, with improved performance and better recognition from longer distances.

Storage capacity

The new iPhone has the same storage tiers as last year's phones: 64, 128, and 256GB.

Battery life

Supporting up to 17 hours of video playback, 10 hours of streamed video playback, and 65 hours of audio playback, iPhone 11 lasts 1 hour longer than the iPhone XR.

Though fast charging is supported, this model still ships with a standard 5W power adapter, so fast charging requires extra equipment.

Price

No doubt Apple was tempted to lower the new iPhone's entry-level price after last year's disappointing iPhone sales.

Sold alongside the iPhone 11 Pro and 11 Pro Max, which start at $999 and $1,099, the iPhone 11's entry-level price has come down to $699, a $50 drop from the iPhone XR's $749.

Comparing all three, most buyers will surely go for the most affordable one rather than agonizing over the specs.

Summary

Though the iPhone 11 Pro models have a better screen and camera, the iPhone 11 on its own is still worth the price. It has the same chipset and offers much the same user experience to the Apple faithful.

By Tuan Nguyen

Open sign

Technology review – Open Source Licenses

Open-source licenses are licenses that allow software to be freely used, modified, and shared under defined terms and conditions.

These licenses allow users or organizations to adjust the program’s functionality to perform for their specific needs.

Though this term originated in the context of software development to designate a specific approach to creating computer programs, its meaning has since evolved into something broader, touching on ethics, values, and sustainability.

Today, when we say “open source” we are following the open source way.

Open source projects, products, or initiatives embrace and celebrate principles of:

  • Open exchange;
  • Collaborative participation;
  • Rapid prototyping;
  • Transparency;
  • Meritocracy; and
  • Community-oriented development.

Tl; dr;

An open-source license is a legal, binding contract between the author and the user of a software component, declaring that the software can be used, including in commercial applications, under specified conditions.

Open Source Licenses

The Open Source Compatibility Chart

Source: Duke Computer Science

Open source licenses are vital for setting out terms and conditions on which software may be used, modified, or distributed and by whom. They are also used to facilitate access to software as well as restrict it.

A license agreement exists so that potential users know which limitations owners may want to enforce; at the same time, owners gain solid ground for their legal claims and greater control over how their work is used.

Originally, the open source movement started in 1983 when Richard Stallman, a programmer at the MIT Artificial Intelligence Laboratory at the time, said he would create a free alternative to the Unix operating system, then owned by AT&T; Stallman dubbed his alternative GNU, a recursive acronym for “GNU’s Not Unix.”

In Stallman's vision, "free" software ensures that users are:

  • Free to use the software as they see fit;
  • Free to study its source code;
  • Free to modify it for their own purposes; and
  • Free to share it with others.

From then on, other programmers followed Stallman's example. One of the most important was Linus Torvalds, the Finnish programmer who created the Linux kernel in 1991.

Major types of open-source licenses

Open-source licenses come in two major types:

Copyleft License.

A free software license that permits the work to be freely copied, modified, and distributed, on the condition that derivative works carry the same license terms.

With copyright law, authors have complete control over their materials.

But under a copyleft license, users are permitted to copy, modify, and distribute the copyrighted material.

However, authors still retain a say in how the material is used, through the conditions attached to the license.

Copyleft licenses do, however, require source code distribution: if you distribute the software or a derivative of it, you must also make the corresponding source available under the same terms. In exchange, users get rights normally reserved for the copyright holder, including distribution and copying.

Popular examples include GNU General Public License (GNU GPL), Common Development and Distribution License (CDDL), Mozilla Public License (MPL), Affero General Public License(AGPL) and Eclipse Public License (EPL).

*Note: Not all copyleft licenses are compatible with one another.

To comply with this type of license, you must release your combined or derivative work under the same copyleft license and provide the corresponding source code; you cannot simply relicense it as you please.

As a result, GPL-licensed code is not typically suitable for inclusion in proprietary software, although some variants, such as LGPL (Lesser GPL) code, may be usable in that setting.

Permissive License.

A free software license for copyrighted code that can be freely used, modified, and redistributed, with only minimal conditions. Permissive-licensed code can be included in proprietary, derivative software.

Popular examples include MIT License, BSD Licenses, Apple Public Source License, Apache License and Microsoft Public License (Ms-PL).

*Note: Though a permissive license is not as restrictive as a copyleft license, it still requires compliance with a number of obligations (e.g. inclusion of the original license or copyright notice) before the code can be redistributed.
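In practice, complying with a permissive license often just means carrying the notice along with the code. A common pattern is an SPDX identifier plus the original copyright and permission notice at the top of the source file, as in this generic sketch (not tied to any particular project):

```python
# SPDX-License-Identifier: MIT
#
# Copyright (c) 2019 Original Author
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction...
# (the full MIT permission notice is retained verbatim in the real file)

def reused_component():
    """Code reused from an MIT-licensed project, redistributed with its notice intact."""
    return "hello"
```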

Minor types of open-source licenses

Open-source licenses also have two minor types:

Public Domain.

Code that can be used by the public without restriction or copyright. This type of license is intended to be truly free of costs and commonly has very few (if any) obligations with which you need to comply.

A popular example is the Creative Commons CC0 dedication.

*Note: Though public domain code should be the simplest license to deal with, PLEASE be aware that not all jurisdictions have a common definition of what public domain means. You should ALWAYS check local regulations.

Source Available.

An emerging type of license applied to code that cannot be offered "as a service"; source-available licenses are being defined in response to cloud providers.

Popular examples include Redis’ Source Available License (RSAL), MongoDB’s Server Side Public License (SSPL), the Cockroach Community License (CCL), or licenses that have had the Commons Clause added.

Summary

There are numerous open-source licensing agreements a program or file may fall under, so refer to the appropriate documentation to see what the original developer allows and prohibits.

By Tuan Nguyen

Supercomputer visualized

Technology review – What are supercomputers?

As we know, computers are digital and electronic programmable machines for processing, storing and displaying information.

The term "computer" once meant a person who did calculations; now it means an automated machine designed to respond to a specific set of instructions and execute a prerecorded list of them.

They have made our lives easier and more productive, opening the way to economic growth and technological innovation.

Computers are used (then and now) in various sectors such as education, medicine, finance, transport, business and e-commerce, architecture, and many more.

Because they are so essential, computers have been pushed to the limits of speed and performance, from ordinary tasks to extraordinary ones, paving the way for a more ambitious technological feat: the creation of the so-called "SUPERCOMPUTERS…"

Tl; dr;

Supercomputers are machines with enormous computing power, allowing them to run many thousands of times more calculations than ordinary computers. They are also ideally suited to testing and developing complicated algorithms and calculations.

In the near future, supercomputers will be used to analyze air pollution, research renewable energy sources, and analyze data from on-board car sensors and cameras, which can further the development of autonomous cars.

Supercomputers: The most powerful

The Summit supercomputer

Historically, supercomputers are closely associated with Seymour Cray.

Recognized as the "Father of Supercomputing," he designed the first officially designated supercomputers for Control Data in the late 1960s, and his work remains a cornerstone on which modern supercomputers are still based.

Having blazing speed, supercomputers are found in research facilities, government agencies and businesses performing mathematical calculations as well as collating, collecting, analyzing, and categorizing data.

A few of their uses include delving into the patterns of protein folding, uncovering the origins of the universe, understanding earthquakes, mapping the bloodstream, modelling swine flu, and forecasting the weather.

Some statistics

TOP500.org reports that, as of June 2019, the most powerful supercomputer is Oak Ridge National Laboratory's Summit, located in Oak Ridge, Tennessee, which uses well over 2 million processor cores. Summit is aimed at research in energy, artificial intelligence, and human health.

Supercomputer benchmark

Supercomputers consist of tens of thousands of processors and can perform billions or even trillions of calculations per second, which is what enables high-performance computing.

They employ a kind of processing so demanding that it cannot be handled by ordinary general-purpose computers. These machines have traditionally been used for scientific and engineering applications, handling very large databases, doing enormous amounts of computation, or both.

The computing performance of a supercomputer is measured in FLOPS (floating-point operations per second) instead of MIPS (million instructions per second).
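As a back-of-the-envelope example of what a FLOPS figure means (illustrative numbers only, not any specific machine's spec sheet):

```python
# Theoretical peak FLOPS = cores x clock rate (Hz) x floating-point operations per cycle
cores = 2_000_000        # on the order of Summit's core count mentioned above
clock_hz = 3.0e9         # 3 GHz, an illustrative clock speed
flops_per_cycle = 16     # illustrative: wide vector units doing fused multiply-adds

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops:.2e} FLOPS")   # ~9.60e+16, i.e. roughly 100 petaFLOPS
```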

The largest, most powerful supercomputers are multiple computers that perform parallel processing. There are two parallel processing approaches in general: symmetric multiprocessing (SMP) and massively parallel processing (MPP).
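The parallel idea itself is easy to sketch on an ordinary machine with Python's standard library (a toy data-parallel example, nothing like real MPP code):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker processes its own slice of the data independently."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]       # split the work four ways
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)                                   # same answer, computed in parallel
```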

In the world, there are now hundreds of supercomputers.

According to the business data platform Statista, as of June 2019, 219 of the world's 500 most powerful supercomputers were located in China, nearly double the count of its nearest competitor, the United States, which accounted for a further 116. Together, the two nations account for around two-thirds of the world's most powerful supercomputers.

Supercomputer possessed by countries

Characteristics

Supercomputers:

  • Can support more than a hundred users at a time;
  • Can handle massive amounts of calculations beyond the human capabilities;
  • Can be accessed by many individuals at the same time; and
  • Are the most expensive computers ever made.

Features

Supercomputers:

  • Have more than 1 CPU (Central Processing Unit);
  • Can support extremely high computation speed of CPUs;
  • Can operate on pairs of lists of numbers instead of pairs of numbers; and
  • Were initially used in applications related to national security, nuclear weapon design, and cryptography.

Uses

Supercomputers:

  • Are not used for everyday tasks due to their superiority;
  • Are needed by businesses to analyze data collected from their cash registers to help control inventory or spot market trends;
  • Help climatologists predict weather patterns of hurricanes and tornado strikes;
  • Help physicists simulate the formation of the first galaxy and the creation of stars from cosmic dust and gas;
  • Help geophysicists predict how earthquake waves will travel both locally and globally;
  • Are useful for modelling the nervous system;
  • Help unravel the structure of swine flu;
  • Create better simulations of nuclear explosions away from real-world nuke testing;
  • Design renewable energy facilities and testing new materials for solar cells;
  • Develop highly complex encryption technologies and defense measures against cyber attacks;
  • Efficiently manage traffic infrastructures; and
  • Help researchers find better ways to combat auto-immune diseases, cancer or diabetes.

Summary

Supercomputers are increasingly becoming useful in major industries, tech companies, and countries around the globe.

Numerous researchers predict that by 2021 supercomputers will be able to compute at the "exa" scale, meaning one quintillion operations per second.

By Tuan Nguyen

ZAO application

Technology review – ZAO

Showing just how far deepfake technology has come, a new app built on much the same technology has recently emerged.

ZAO, a Chinese face-swapping app, has exploded across social media and racked up millions of downloads online.

It allows its users to swap their faces with showbiz and sports personalities or anyone else in a video clip or GIF using artificial intelligence (AI).

The app was uploaded to China’s iOS App Store on August 30 and became the most downloaded free app on Sunday, as sources say.

According to a post from the app makers on China’s Twitter-like Weibo, ZAO’s servers nearly crashed due to the surge in traffic.

App Annie, a firm that tracks app downloads around the globe, says ZAO was the most-downloaded free app in China's iOS App Store as of September 1.

ZAO is owned by NASDAQ-listed Momo Inc., the same company behind China’s version of Tinder, Tantan.

Tl; dr;

ZAO (Chinese name: 颜技-全民AI视频换脸做演员), is a new Chinese application that lets people swap faces realistically with prominent personalities in a series of videos and GIFs, and then share it with their friends.

They can sign-up for this app using their phone number and upload images of their face, using photographs taken with their smartphone.

The app isn’t currently available to anyone without a Chinese phone number, and isn’t listed on the UK or US App Store or Play Store.

ZAO: A Chinese “deepfake” face app

ZAO mobile app

Source: IOS APP STORE

With ZAO, anyone who aspires to be part of Titanic, Game of Thrones, or The Big Bang Theory can now skip the audition and go straight to the limelight without the strenuous hard work, talent, and dedication.

Aside from Chinese celebrities, famous faces of the app include Hollywood celebrities like Marilyn Monroe and Leonardo DiCaprio.

Allan Xia, a 30-year-old artist and game developer based in Auckland, New Zealand, happens to have a Chinese number; he tried the app and uploaded the result to Twitter.

He became the face of the app last month after inserting himself into a Leonardo DiCaprio montage, a process that took him only seconds.

ZAO swap face

Privacy Issues

As the app went viral, numerous issues were raised.

Some users complained that the app's privacy policy could endanger them, creating a backlash that pushed the company to change its terms.

One section of ZAO's user agreement stated that consumers who upload their images to the app agree to surrender the intellectual property rights to their face and permit the company to use their images for marketing purposes.

On Weibo, China’s version of Twitter, ZAO said that it would address those concerns:

“We thoroughly understand the anxiety people have towards privacy concerns,” the company said. “We have received the questions you have sent us. We will correct the areas we have not considered and require some time.”

The company also said that it won’t use head shots or videos uploaded by users except to improve the app and won’t store images if users delete them from their accounts.

User Agreement Revision

Meanwhile, CNN reported that ZAO has already changed its user agreement.

In its latest version of user agreement, ZAO “will try its best, based on the privacy terms, to use the content you have authorized us to use within a reasonable, necessary and expressly stated extent.”

"Your necessary authorization and agreement will not change your ownership of the intellectual property rights," as the terms now put it.

Also, the company promised in its statement not to store “facial biometric data” on its app and would delete any information about users “according to the law” if they erase their accounts.

In a statement, Momo Inc. said that the terms would also apply to any users who signed the app’s original terms and conditions.

“We protect personal data and value data safety,” ZAO added. “We’ve also adopted several safety measures including storage encryption.”

Domino effect

The issue affected ZAO's App Store ratings, with numerous companies and platforms speaking out against the deepfake app.

Alipay, a popular Chinese online payment system, responded:

"Rest assured that no matter how sophisticated the current facial swapping technology is, it cannot deceive our payment apps," Alipay said in a statement Sunday on its Weibo account. "Even in the extremely rare case that an account is stolen, insurance companies will cover lost funds in full."

In its Tuesday statement, ZAO cited that the use of its system would not cause “payment risks,” adding that the “security threshold for facial recognition payment is extremely high.”

China’s leading messaging platform WeChat restricted access to the app, prohibiting invites and links citing “security risks” in its decision.

Such is the backlash in China over the ZAO app, that even state-controlled media is running the story.

Summary

In an episode reminiscent of FaceApp, ZAO (built on AI-based deepfake technology) has sparked concerns about a variety of potential problems, including fake news, fake evidence, blackmail, and defamation.

As deepfake technology advances, the audio and video clips it produces, however fun and entertaining, are becoming harder to distinguish from authentic ones.

If nothing else, these apps are a reminder to be more mindful NOW about believing everything we see, and to work at distinguishing the real from the fake.

By Tuan Nguyen

USB 4

Technology review – USB4

The USB Implementers Forum (USB-IF), the one responsible for creating Universal Serial Bus (USB) specifications, has just published the USB4 spec (previously styled as USB 4) this week.

Tl; dr;

Bringing better compatibility and speed, the new USB4 standard is going to utilize the smaller and reversible USB-C port instead of the standard USB-A port. It has the ability to deliver up to 40 gigabits per second of transfer speed and has better resource allocation in terms of display and data splitting.

What is it?

The new USB4 architecture is backward-compatible with USB 2 and USB 3, and uses the same USB Type-C connectors – the small plug used in all modern Android phones and by Thunderbolt 3.

It could be a unifying interface, eliminating bulky cables and oversized plugs and providing throughput that satisfies everyone from laptop users to server administrators.

As first announced in March this year, one of the most important aspects of USB4 is that it merges USB with Thunderbolt 3, an Intel-designed interface for display and data connectivity that hasn't really caught on outside of laptops despite its great potential.

Unfortunately, Thunderbolt 3 is listed as an option for USB4 devices, so some will have it and some won’t.

The published spec makes it clear: It’s up to USB4 device makers to support Thunderbolt.

In a lengthy statement, the USB Implementers Forum clarified the issue, making clear that it believes many PC vendors, at least, will support a joint USB/Thunderbolt implementation.

“Regarding USB4 specification’s optional support for Thunderbolt 3, USB-IF anticipates PC vendors to broadly support Thunderbolt 3 compatibility in their USB4 solutions given Thunderbolt 3 compatibility is now included in the USB4 specification and therefore royalty free for formal adopters,” the USB-IF said in a statement. “That said, Intel still maintains the Thunderbolt 3 branding/certification so consumers can look for the appropriate Thunderbolt 3 logo and brand name to ensure the USB4 product in question has the expected Thunderbolt 3 compatibility. Furthermore, the decision was made not to make Thunderbolt 3 compatibility a USB4 specification requirement as certain manufacturers (e.g. smartphone makers) likely won’t need to add the extra capabilities that come with Thunderbolt 3 compatibility when designing their USB4 products.”

USB4: Ending your cable nightmare

USB cables

Source: SILICON

As the USB-IF puts it, USB4 "is a connection-oriented, tunneling architecture designed to combine multiple protocols onto a single physical interface, so that the total speed and performance of the USB4 Fabric can be dynamically shared." It "allows for USB data transfers to operate in parallel with other independent protocols specific to display, load/store and host-to-host interfaces. Additionally, USB4 extends performance beyond the 20 Gbps (Gen 2 x 2) of USB 3.2 to 40 Gbps (Gen 3 x 2) over the same dual-lane, dual-simplex architecture."

Offering faster data transfer speeds and enhanced compatibility, USB4 will have the following features, as listed by the USB Implementers Forum:

  • Enhanced connectors and cables. Built on USB Type-C, the USB4 connector is smaller, reversible, and much easier to plug in.
  • Thunderbolt 3 compatibility. Previous USB standards offered no Thunderbolt 3 compatibility. In 2017, Intel donated the Thunderbolt 3 specification to the USB Implementers Forum for third-party use. Thunderbolt support allows for externally connected GPUs as well as high-throughput, low-latency connections to more traditional peripherals such as storage and displays.
  • Speed. Doubling the latest USB 3.2 version, which maxes out at 20 Gbps, USB4 can deliver up to 40 gigabits per second of transfer speed. Brad Saunders, USB Promoter Group CEO, has stated that the latest version of the ubiquitous connector will offer three speeds: 10 Gbps, 20 Gbps, and 40 Gbps (see the quick calculation after this list).
  • Supports USB PD. One of USB4's most welcome features is mandatory support for USB Power Delivery (USB PD), something not all USB-C devices offer today. USB PD supports up to 100 watts of power delivery, and every USB4 device and host must comply with the standard, which means faster charging and better battery life. Phones and laptops can charge at up to 100W, while lower-power devices such as headsets can draw a much lower trickle charge.
  • Better resource allocation. USB4 dynamically adjusts the amount of available bandwidth when video and data are sent over the same connection, reducing the chance of slowdowns.
  • Backward compatible with older devices. The newest version of USB can still be used with USB 3 and USB 2 devices and ports, with adapters or dongles where the connector differs.
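For a feel of what those speeds mean in practice, here is a quick, simplified calculation (illustrative file size, ignoring protocol overhead and encoding):

```python
# Rough transfer-time comparison at the quoted line rates
file_size_gb = 100                        # gigabytes, an illustrative large file
file_size_gbit = file_size_gb * 8         # 800 gigabits

for label, gbps in [("USB 3.2 (20 Gbps)", 20), ("USB4 (40 Gbps)", 40)]:
    seconds = file_size_gbit / gbps
    print(f"{label}: ~{seconds:.0f} s")    # 40 s vs 20 s for the same file
```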

Summary

Definitely a big update, USB4 will use the USB-C port and run at up to 40 Gbps, doubling the speed of the preceding USB 3.2 specification. Plus, it can be Thunderbolt compatible.

“Multiple data and display protocols” is how the USB Promoter Group describes USB4’s capabilities.

Whether the draw is raw speed or smarter bandwidth sharing, the new USB4 will be useful to just about everyone.

By Tuan Nguyen