
Knowledge Update

Introduction & Purpose
Knowledge Update and Industry Update at Skyline University College (SUC) is an online platform for sharing knowledge with SUC stakeholders, industry, and the wider world about current trends in business development, technology, and social change. The platform helps brand SUC as a leading institution with an up-to-date knowledge base and encourages faculty, students, and others to create and contribute across different domains and areas of application. The platform also acts as a catalyst for learning and sharing knowledge in various areas.

App or website: What best protects your privacy?

New York, Sep 14 (IANS) The free apps and web-based services you have downloaded on Android or iOS mobile devices may have leaked your personal information, including names, gender, phone numbers and e-mail addresses, a study has found.

Twitter launches app for Amazon's Alexa

New York, Sep 10 (IANS) It's now time to be a bit lazier: micro-blogging website Twitter has unveiled an app for Amazon's voice platform Alexa, so that Alexa can read your tweets aloud on your Echo speaker or other Alexa-powered device.

Sharing smiling selfies can help you beat the blues

New York, Sep 14 (IANS) Taking smiling selfies with your smartphone and sharing them with your friends can help make you a happier person, say computer scientists at the University of California, Irvine.

"This study shows that sometimes our gadgets can offer benefits to users," said senior author Gloria Mark, Professor of Informatics.

"Our research showed that practicing exercises that can promote happiness via smartphone picture taking and sharing can lead to increased positive feelings for those who engage in it," lead author Yu Chen, a post-doctoral scholar, added.

By conducting exercises via smartphone photo technology and gauging users' psychological and emotional states, the researchers found that the daily taking and sharing of certain types of images can positively affect people.

Chen and her colleagues designed and conducted a four-week study involving 41 college students.

The participants -- 28 female and 13 male -- were instructed to continue their normal day-to-day activities (going to class, doing schoolwork, meeting with friends, etc.) while taking part in the research.

Each was invited to the informatics lab for an informal interview and to fill out a general questionnaire and consent form. The scientists helped students load a survey app onto their phones to document their moods during the first "control" week of the study.

Participants used a different app to take photos and record their emotional states over the following three-week "intervention" phase.

The project involved three types of photos to help the researchers determine how smiling, reflecting and giving to others might impact users' moods.

The first was a selfie to be taken daily while smiling. The second was an image of something that made the photo taker happy. The third was a picture of something the photographer believed would bring happiness to another person (which was then sent to that person). Participants were randomly assigned to take photos of one type.

Researchers collected nearly 2,900 mood measurements during the study and found that subjects in all three groups experienced increased positive moods.

Some participants in the selfie group reported becoming more confident and comfortable with their smiling photos over time, said the study published in the journal Psychology of Well-Being.

The students taking photos of objects that made them happy became more reflective and appreciative.

And those who took photos to make others happy became calmer and said that the connection to their friends and family helped relieve stress.

Playing music during biopsy helps to reduce anxiety

New York, Sep 13 (IANS) Playing music during a biopsy for breast cancer diagnosis and treatment helps reduce patients' pre-operative anxiety, research has found.

The study published in the journal AORN provided insights into the impact of implementing a music therapy programme for surgical patients.

The paper examined the effect of live and recorded music on the anxiety of 207 women undergoing a biopsy for breast cancer diagnosis and treatment, randomising patients into a control group (no music), a live-music group, or a recorded-music group.

The researchers presented patients in the experimental groups with a live song performed by a music therapist at bedside or a recorded song played on an iPod through earphones.

Participants in both live and recorded-music groups experienced a significant reduction in pre-operative anxiety of 42.5 per cent and 41.2 per cent, respectively, when compared to the control group.

"During our two-year trial, we gained information on potential benefits, challenges and methods of facilitating a surgical music therapy program," said Jaclyn Bradley Palmer, Music Therapist at the University Hospitals Seidman Cancer Center, US.

The researcher said that a music therapist may be highly beneficial in the surgical setting, and music therapy may be a means of enhancing the quality of patient care in collaboration with perioperative nurses.

"As an interdisciplinary surgical staff member, the music therapist may help nurses achieve patient-related goals of anxiety reduction, pain management, effective education and satisfaction. And by having professional music therapists facilitate surgical music therapy programs, nursing workloads also may be reduced," Palmer added.

New technology may help read brain signals directly

New York, Sep 13 (IANS) Researchers have developed a new technology that can help read brain signals directly and may also aid people with movement disabilities to better communicate their thoughts and emotions.

The technology involves a multi-electrode array implanted in the brain to directly read signals from a region that ordinarily directs hand and arm movements used, for example, to move a computer mouse.

The algorithms translate those signals and help to make letter selections. 

"Our results demonstrate that this interface may have great promise for use in people as it enables a typing rate sufficient for a meaningful conversation," said Paul Nuyujukian, postdoctoral student at Stanford University in California, US.

In an experiment conducted with monkeys, the animals were able to transcribe passages from the daily newspaper The New York Times and from Hamlet, William Shakespeare's tragedy, at a rate of up to 12 words per minute.

Using these high-performing algorithms, the animals could type more than three times faster than with earlier approaches.

However, people using this system would likely type more slowly, the researchers said, while they think about what they want to communicate or how to spell words. 

People might also be in more distracting environments and in some cases could have additional impairments that slow the ultimate communication rate.

Despite that, even a rate lower than the 12 words per minute achieved by monkeys would be a significant advance for people who are not otherwise able to communicate effectively or reliably, Nuyujukian said.
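The figures quoted above imply a rough baseline for the earlier approaches. A minimal back-of-the-envelope sketch, assuming the "more than three times faster" comparison applies to the 12-words-per-minute peak rate (an approximation, not a figure from the paper):

```python
# Figures quoted above: up to 12 words per minute with the new
# algorithms, "more than three times faster" than earlier approaches,
# which implies an earlier rate of under about 4 words per minute.

NEW_RATE_WPM = 12.0
MIN_SPEEDUP = 3.0  # lower bound on the reported speedup

# Upper bound on the rate achievable with the earlier approaches.
earlier_rate_upper_bound = NEW_RATE_WPM / MIN_SPEEDUP
print(earlier_rate_upper_bound)  # under 4.0 words per minute
```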

Earlier versions of the technology have already been tested successfully in people with paralysis, but the typing was slow and imprecise. 

The latest work tests improvements to the speed and accuracy of the technology that interprets brain signals and drives the cursor, the researchers said.

"The interface we tested is exactly what a human would use. What we had never quantified before was the typing rate that could be achieved," Nuyujukian added. 

Other technologies for helping people with movement disorders involve tracking eye movements or tracking movements of individual muscles in the face. 

However, these have limitations, and can require a degree of muscle control that might be difficult for some people. 

For example, drooping eyelids may prevent the use of eye-tracking software, and other approaches may prove too tiring for some people.

Directly reading brain signals could overcome some of these challenges and provide a way for people to better communicate their thoughts and emotions, the researchers noted, in the paper published in the journal Proceedings of the IEEE.

Wearable tech converts body heat to electricity

New York, Sep 13 (IANS) Researchers have developed a new design for harvesting body heat and effectively converting it into electricity for use in wearable electronics.

The experimental prototypes are lightweight, conform to the shape of the body, and can generate far more electricity than previous lightweight heat harvesting technologies, the researchers said.

"Wearable thermoelectric generators (TEGs) generate electricity by making use of the temperature differential between your body and the ambient air," said corresponding author Daryoosh Vashaee, Associate Professor at North Carolina State University.

"Previous approaches either made use of heat sinks -- which are heavy, stiff and bulky -- or were able to generate only one microwatt or less of power per square centimetre. Our technology generates up to 20 microwatts per square centimetre and does not use a heat sink, making it lighter and much more comfortable," he added in a university statement.

The new design begins with a layer of thermally conductive material that rests on the skin and spreads out the heat. 

The conductive material is topped with a polymer layer that prevents the heat from dissipating through to the outside air. 

This forces the body heat to pass through a centrally-located thermoelectric generator. Heat that is not converted into electricity passes through the thermoelectric generator into an outer layer of thermally conductive material, which rapidly dissipates the heat. 

The entire system is thin -- only two millimetres -- and flexible.

"In this prototype, the thermoelectric generator is only one centimetre squared, but we can easily make it larger, depending on a device's power needs," Vashaee said.
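Vashaee's numbers make the scaling straightforward to sketch. A minimal estimate in Python, assuming output grows linearly with area at the stated density of 20 microwatts per square centimetre (an idealisation; actual output would depend on skin contact and the temperature differential):

```python
# Back-of-the-envelope scaling for the wearable TEG described above.
# Assumption: output scales linearly with area at the quoted power
# density of 20 microwatts per square centimetre.

POWER_DENSITY_UW_PER_CM2 = 20.0  # figure quoted by Vashaee

def teg_output_uw(area_cm2: float) -> float:
    """Estimated output in microwatts for a generator of the given area."""
    return POWER_DENSITY_UW_PER_CM2 * area_cm2

print(teg_output_uw(1))   # the 1 cm^2 prototype: 20.0 uW
print(teg_output_uw(10))  # a hypothetical 10 cm^2 patch: 200.0 uW
```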

The researchers also found that the upper arm was the optimal location for heat harvesting.

While the skin temperature is higher around the wrist, the irregular contour of the wrist limited the surface area of contact between the thermoelectric generator band and the skin. 

Meanwhile, wearing the band on the chest limited air flow -- limiting heat dissipation -- since the chest is normally covered by a shirt.

Human sounds and languages are linked: Scientists

New York, Sep 14 (IANS) In an analysis of nearly two-thirds of the world's languages, scientists have found that humans tend to use the same sounds for common objects and ideas, no matter what language they speak.

The research challenged a cornerstone concept in linguistics, demonstrating a robust statistical relationship between certain basic concepts -- from body parts to familial relationships and aspects of the natural world -- and the sounds humans around the world use to describe them, the researchers said.

"These sound symbolic patterns show up again and again across the world, independent of the geographical dispersal of humans and independent of language lineage," said Professor and Cognitive scientist Morten H. Christiansen, of Cornell University in New York, US.

"There does seem to be something about the human condition that leads to these patterns. We do not know what it is, but we know it's there," Christiansen added.

For example, in most languages, the word for 'nose' is likely to include the sounds 'neh' or the 'oo' sound, as in 'ooze'; for 'tongue' an 'l' (as in "langue" in French). 

Similarly, 'leaf' would include the sounds 'b', 'p' or 'l'; 'sand' uses the sound 's'; and words for 'red' and 'round' tend to include the 'r' sound. 

"It doesn't mean all words have these sounds, but the relationship is much stronger than we'd expect by chance," Christiansen said.

The associations were particularly strong for words that described body parts. 

The team also found certain words are likely to avoid certain sounds. This was especially true for pronouns.

For example, words for 'I' are unlikely to include sounds involving u, p, b, t, s, r and l. 'You' is unlikely to include sounds involving u, o, p, t, d, q, s, r and l, the researchers observed.

For the study, an international team of physicists, linguists and computer scientists from Argentina, Germany, the Netherlands and Switzerland analysed 40-100 basic vocabulary words in 62 per cent of the world's more than 6,000 current languages and 85 per cent of its linguistic lineages.
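The coverage figures above translate into concrete counts. A small sketch, taking 6,000 as a lower bound on the number of current languages (the source says "more than 6,000"):

```python
# Rough count implied by the survey coverage described above:
# 40-100 basic vocabulary words analysed in 62 per cent of the
# world's more than 6,000 current languages.

TOTAL_LANGUAGES = 6000  # lower bound from "more than 6,000"
COVERAGE = 0.62         # 62 per cent of languages analysed

languages_analysed = round(TOTAL_LANGUAGES * COVERAGE)
print(languages_analysed)  # at least 3720 languages
```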

The words included pronouns, body parts and properties (small, full), verbs that describe motion and nouns that describe natural phenomena (star, fish).

They found a considerable proportion of the 100 basic vocabulary words have a strong association with specific kinds of human speech sounds. 

"The results of the study are conservative; the actual number of sound symbolism patterns may in fact be even greater," Christiansen said.

The findings challenge one of the most basic concepts in linguistics: the century-old idea that the relationship between a sound of a word and its meaning is arbitrary.

The researchers do not know why humans tend to use the same sounds across languages to describe basic objects and ideas.

But these concepts are important in all languages, and children are likely to learn these words early in life, Christiansen noted in the paper published in the journal Proceedings of the National Academy of Sciences.

Memory loss not the only indicator in Alzheimer's diagnosis

New York, Sep 14 (IANS) Researchers should not rely on the clinical symptom of memory loss alone to diagnose Alzheimer's disease, because there can be other indicators of the neurodegenerative disease that do not initially affect memory, says a new study.

Alzheimer's disease has more than one symptom. These can include language problems, disruptive behaviour and personality changes -- even impaired judgement of where objects are positioned in space, said researchers at Northwestern University in Evanston, Illinois, US.

If the disease affects personality, it may cause a lack of inhibition. For example, someone who was previously shy might one day walk up to a grocery store clerk -- a complete stranger -- and try to hug or even kiss her. 

This all depends on what part of the brain is affected by Alzheimer's, the study said.

However, "these individuals are often overlooked in clinical trial designs and thus miss out on opportunities to participate in the experiments formulated to treat Alzheimer's", said lead author and Associate Professor Emily Rogalski at Northwestern University.

"Such individuals are often excluded because they don't show memory deficits, in spite of sharing the same disease (Alzheimer's) that is causing their symptoms," Rogalski added.

In the study, the authors identified the clinical features of individuals with primary progressive aphasia (PPA) -- a rare dementia that causes progressive declines in language abilities due to Alzheimer's disease. 

During the initial phase of PPA, memory and other thinking abilities are relatively intact. Also, PPA can be caused either by Alzheimer's disease or another neurodegenerative disease family called Frontotemporal lobar degeneration (FTLD). 

The study demonstrated that knowing an individual's clinical symptoms was not enough to determine whether PPA was due to Alzheimer's or another neurodegenerative disease -- one in which progressive loss of the structure or function of neurons, including the death of neurons, occurs. 

Therefore, an amyloid positron emission tomography (PET) scan -- an imaging test -- should be performed.

A PET scan tracks the presence of amyloid -- an abnormal protein whose accumulation in the brain is a hallmark of Alzheimer's. 

The scan should be used early in life to determine the likelihood of Alzheimer's disease pathology in later life, the researchers said in the study published online in the journal Neurology. 

Gaia probe pins down precise positions of over a billion stars

London, Sep 14 (IANS) On its way to assembling the most detailed 3D map ever made of our Milky Way galaxy, European Space Agency's Gaia probe has pinned down the precise position in the sky and the brightness of 1,142 million stars, astronomers working on the mission have said.

The first catalogue of more than a billion stars from the Gaia satellite was published on Wednesday -- the largest all-sky survey of celestial objects to date.

The release also features the distances and the motions across the sky for more than two million stars.

"Gaia is at the forefront of astrometry, charting the sky at precisions that have never been achieved before," said Alvaro Gimenez, ESA's Director of Science.

"Today's release gives us a first impression of the extraordinary data that await us and that will revolutionise our understanding of how stars are distributed and move across our Galaxy," Gimenez said in a statement on Wednesday.

Launched in 2013 on a Soyuz-STB/Fregat-MT launch vehicle from the European Spaceport in Kourou, French Guiana, Gaia started its scientific work in July 2014. 

This latest release is based on data collected during its first 14 months of scanning the sky, up to September 2015.

"The beautiful map we are publishing today shows the density of stars measured by Gaia across the entire sky, and confirms that it collected superb data during its first year of operations," Timo Prusti, Gaia project scientist at ESA, said.

The Gaia probe was launched with the aim of making the largest, most precise three-dimensional map of our Galaxy by surveying more than a thousand million stars.

At its heart, Gaia contains two optical telescopes that work with three science instruments to precisely determine the location of stars and their velocities, and to split their light into a spectrum for analysis.

During its planned five-year mission, the spacecraft spins slowly, sweeping the two telescopes across the entire celestial sphere. 

As the detectors repeatedly measure the position of each celestial object, they will detect any changes in the object's motion through space.

Toddlers using touchscreens develop better motor skills

London, Sep 14 (IANS) Has your toddler started playing with a touchscreen tablet yet? If so, that may be good news: active scrolling of the screen is linked to finer motor control, says a recent study.

According to the study conducted at the University of London, early touchscreen use, in particular active scrolling, correlated with increased fine motor skills.

Researcher Tim J. Smith of Birkbeck, University of London, set up an online survey for parents to answer questions about their children's touchscreen use.

This included questions about whether the toddlers used touchscreens, when they first used one, and how often and for how long they used them. 

The survey also included specific questions to assess the development of the children, such as the age that they first stacked blocks -- which indicates fine motor skills -- or the age they first used two-word sentences -- which indicates language development.

In all, 715 families responded, confirming that touchscreen use is extremely common among toddlers. 

"The study showed that the majority of toddlers had daily exposure to touchscreen devices, increasing from 51.22 per cent at six to 11 months to 92.05 per cent at 19-36 months," Smith added. 

In toddlers aged 19-36 months, the researchers found that the age at which parents reported their child first actively scrolling a touchscreen was positively associated with the age at which the child was first able to stack blocks, a measure of fine motor control.

The study, published in the journal Frontiers in Psychology, also stated that the current generation of toddlers was adapting rapidly to new technology.