Source: http://technologyforlearners.com/a-brief-history-of-education-educational-technology/
Author: Will Fastiggi

Prior to technology, word-of-mouth communication was the only form of education that existed.

Back then, when schools were available only to the aristocracy, the assumption would have been that leisure was synonymous with learning. In these times, speech was the primary means by which people learned and passed on knowledge, making accurate memorisation a critical skill. The first examples of educational technology in the ancient world were the tools that students and teachers used for writing. Then, by the time of Shakespeare’s Europe, in the 16th century, schools had more or less become the education…

Author: Pascal Zoleko

Sometimes, when we (Flexudy Education) are not busy creating services to improve the quality of education and research, we write blogs about ideas that cross our minds. Today we will talk about CATs: not the cute ones, but Competitive Analysis Trees.

Source: https://www.slideshare.net/PitchDeckCoach/airbnb-first-pitch-deck-editable

To illustrate how CATs are used, let’s take a look at the classic Airbnb pitch deck first. If you are not familiar with competitive analysis, don’t worry, it’s super easy. If you look at the picture on the left, you can see that going down means expensive, while going up means affordable (or…


Authors: Zhigang Lu and Hong Shen, in arXiv:2001.01825v1, January 2020

The summary below was automatically created by Flexudy. Feel free to download the app from your Google Play Store or Apple App Store.

Have fun reading

Differentially private trajectory publishing concerns publishing path information that is usable to the genuine users yet secure against adversaries who would reconstruct the path with maximum background knowledge. The existing studies all assume this knowledge to be all but one vertex on the path, i.e., the adversaries know the network topology and the path except one missing vertex (and its two connected…


Author: Björn Bebensee, in arXiv:1907.11908v1, July 2019

The summary below was automatically created by Flexudy. Feel free to download the app from your Google Play Store or Apple App Store.

Have fun reading

Local Differential Privacy (LDP) is a state-of-the-art approach which allows statistical computations while protecting each individual user’s privacy. Unlike Differential Privacy, no trust in a central authority is necessary, as noise is added to user inputs locally. There has been an ever-growing interest in collecting statistics from user data to improve products and to gain valuable insights. In the past decade an approach called Differential Privacy [13]…
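To make the local model concrete, here is a small sketch of randomized response, the textbook ε-LDP mechanism (our own illustration, not code from the paper): each user perturbs their bit before it ever leaves their device, and the aggregator debiases the noisy reports.

```python
import math
import random

def randomized_response(true_bit: int, epsilon: float) -> int:
    # Report the truth with probability e^eps / (e^eps + 1), else flip.
    # The likelihood ratio between any two inputs is at most e^eps,
    # which is exactly the eps-LDP guarantee.
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_truth else 1 - true_bit

def estimate_rate(reports, epsilon):
    # Debias the aggregate: E[observed] = p*t + (1 - p)*(1 - t)
    # for true rate t, so invert that affine map.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

random.seed(0)
true_bits = [1] * 300 + [0] * 700  # true rate is 0.30
reports = [randomized_response(b, math.log(3)) for b in true_bits]
print(round(estimate_rate(reports, math.log(3)), 2))  # close to 0.30
```

With ε = ln 3 this reduces to the classic coin-flip scheme in which each user answers truthfully with probability 3/4, yet no single report reveals anyone’s true bit.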


Authors: Ninghui Li, Wahbeh Qardaji, Dong Su, in arXiv:1101.2604v2, June 2011

The summary below was automatically created by Flexudy. Feel free to download the app from your Google Play Store or Apple App Store.

Have fun reading

The main result of the paper is that k-anonymization, when done “safely”, and when preceded with a random sampling step, satisfies (ε, δ)-differential privacy with reasonable parameters. We consider the scenario where a trusted curator obtains a dataset by gathering private information from a large number of respondents, and then makes use of the dataset while protecting the privacy of respondents. The curator…
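For reference, the guarantee in question is standard (ε, δ)-differential privacy: a randomized mechanism $\mathcal{M}$ satisfies it if, for any two datasets $D$ and $D'$ differing in a single record and any set of outputs $S$,

$$\Pr[\mathcal{M}(D) \in S] \le e^{\varepsilon} \Pr[\mathcal{M}(D') \in S] + \delta.$$

Intuitively, the random sampling step supplies the randomness that deterministic k-anonymization alone lacks: an adversary cannot be certain a given respondent’s record was sampled at all, and that residual uncertainty is what the δ term accommodates.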


Authors: Rajarshi Das, Ameya Godbole, Shehzaad Dhuliawala, Manzil Zaheer, Andrew McCallum

The summary below was automatically created by Flexudy. Feel free to download the app from your play store.

Have fun reading.

Our non-parametric approach derives crisp logical rules for each query by finding multiple graph path patterns that connect similar source entities through the given relation. We also demonstrate that our model is robust in low-data settings, outperforming recently proposed meta-learning approaches. Given a new problem, humans possess the innate ability to ‘retrieve’ and ‘adapt’ solutions to similar problems from the past.

For example, an automobile…
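To make the retrieve-and-adapt idea concrete, here is a toy sketch (hypothetical data and helper names, not the authors’ code): mine the relations that lead similar entities to their known answers, then replay the most frequent rule from the query entity. The paper mines multi-hop path patterns; one hop is enough to show the mechanism.

```python
from collections import Counter

# (head, relation, tail) triples -- hypothetical toy knowledge graph
triples = [
    ("toyota", "located_in_city", "toyota_city"),
    ("toyota", "headquarters", "toyota_city"),
    ("honda", "located_in_city", "minato"),
    ("honda", "headquarters", "minato"),
    ("bmw", "headquarters", "munich"),
]

def answers(entity, relation):
    return {t for h, r, t in triples if h == entity and r == relation}

def predict(query_entity, relation, similar_entities):
    # Retrieve: count the alternative relations that reach the same
    # answers as the query relation for similar entities.
    rules = Counter()
    for e in similar_entities:
        gold = answers(e, relation)
        for h, r, t in triples:
            if h == e and r != relation and t in gold:
                rules[r] += 1
    # Adapt: replay the most frequent rule from the query entity.
    for r, _ in rules.most_common():
        found = answers(query_entity, r)
        if found:
            return found
    return set()

# bmw has no "located_in_city" edge; the "headquarters" rule mined
# from toyota and honda fills it in.
print(predict("bmw", "located_in_city", ["toyota", "honda"]))  # {'munich'}
```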


Authors: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, in arXiv:1810.04805v2, May 2019

The summary below was automatically created by Flexudy. Feel free to download the app from your play store.

Have fun reading

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. BERT is conceptually simple and empirically powerful. There are two existing strategies for…
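A quick way to see the bidirectional masked-language-model objective in action is the fill-mask pipeline from the Hugging Face transformers library (our example, not the paper’s code): BERT predicts the masked token using context from both sides at once.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# BERT conditions on the words BOTH before and after the mask,
# which is the bidirectional behaviour described above.
unmasker = pipeline("fill-mask", model="bert-base-uncased")
for pred in unmasker("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```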


Authors: Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li, in arXiv:1812.01187v2, December 2018

The summary below was automatically created by Flexudy. Feel free to download the app from your play store.

Have fun reading

In the literature, however, most refinements are either briefly mentioned as implementation details or only visible in source code. We will also demonstrate that improvements in image classification accuracy lead to better transfer learning performance in other application domains such as object detection and semantic segmentation. Since the introduction of AlexNet [15] in 2012, deep convolutional neural networks have become the dominating…
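One of the refinements the paper studies is label smoothing; a minimal PyTorch sketch (our paraphrase, with eps as the smoothing weight, not the authors’ code) looks like this:

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, target, eps=0.1):
    # Cross-entropy against a smoothed target: probability mass eps is
    # spread uniformly over all classes, so the model is discouraged
    # from producing over-confident logits.
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)
    return ((1.0 - eps) * nll + eps * uniform).mean()

logits = torch.randn(4, 10)          # batch of 4, 10 classes
target = torch.tensor([1, 0, 4, 9])
print(label_smoothing_loss(logits, target))
```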


Authors: Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya, in arXiv:2001.04451v2, February 2020

The summary below was automatically created by Flexudy. Feel free to download the app from your play store.

Have fun reading

The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences. The number of parameters exceeds 0.5B per layer in the largest configuration reported in (Shazeer et al., 2018), while the number of layers goes up to 64 in (Al-Rfou et al., 2018). Transformer models are also used on increasingly long sequences. Up to 11 thousand tokens…
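Much of that efficiency comes from locality-sensitive hashing (LSH) attention, which buckets queries and keys so that attention is only computed within a bucket. Here is a tiny NumPy sketch of the angular LSH used for bucketing (illustrative only, not the reference implementation):

```python
import numpy as np

def lsh_buckets(vecs, n_buckets, rng):
    # Angular LSH: project onto random directions and take the argmax
    # over [xR; -xR], so vectors with high cosine similarity tend to
    # fall into the same bucket.
    d = vecs.shape[-1]
    R = rng.standard_normal((d, n_buckets // 2))
    proj = vecs @ R
    return np.argmax(np.concatenate([proj, -proj], axis=-1), axis=-1)

rng = np.random.default_rng(0)
queries = rng.standard_normal((8, 64))
print(lsh_buckets(queries, n_buckets=4, rng=rng))  # one bucket id per vector
```

Restricting attention to pairs that share a bucket replaces the O(L²) cost in sequence length L with roughly O(L log L).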


Author: Stephen Merity, in arXiv:1911.11423v2, November 2019

The summary below was automatically created by Flexudy. Feel free to download the app from your play store.

Have fun reading

The leading approaches in language modeling are all obsessed with TV shows of my youth — namely Transformers and Sesame Street. The author’s lone goal is to show that the entire field might have evolved in a different direction if we had instead been obsessed with a slightly different acronym and a slightly different result. We take a previously strong language model based only on boring LSTMs and get it to within a stone’s…

Flexudy Education

Our goal at Flexudy Education is to improve the quality of education and research with the help of trending technologies.
