LaTeX template for the CFM 2017 article submission
Instructions: Comments in the file below indicate where the
following steps have to be performed.
Step 1: Enter abstract title.
Step 2: Enter author information.
Step 3: Enter key words.
Step 4: Enter main text of abstract.
Step 5: Enter references, e.g. using a simple list.
Compressive sensing (CS) is a signal processing technique that emerged as a breakthrough in 2004. The main idea of CS is that, by exploiting the sparse nature of a signal (in some domain), we can reconstruct the signal from far fewer samples than the Shannon-Nyquist sampling theorem requires. Reconstructing a sparse signal from few samples is equivalent to solving an under-determined linear system with sparsity constraints. The least-squares solution to such a problem yields poor results, because sparse signals are not well approximated by the minimum-norm solution. Instead, we solve the problem using the l1 norm (convex), which is the best convex approximation to the exact solution given by the l0 norm (non-convex). In this paper we discuss three applications of CS in estimation theory: (1) reliable channel estimation for TDS-OFDM systems, assuming the sparsity of the channel is known; (2) indoor location estimation from received signal strength (RSS), where CS is used to reconstruct the radio map from RSS measurements; and (3) identification of the subspace in which the signal of interest lies using maximum likelihood (ML) estimation, assuming the signal lies in a union of subspaces, which is a standard sparsity assumption in CS theory.
Index terms: Compressive sensing, indoor positioning, fingerprinting, radio map, maximum likelihood estimation, union of linear subspaces, subspace recovery.