Cognitive Code-Division Multiplexing and Generalized-polygon-based Coding
Over the last decade, wireless communication services have experienced explosive growth while communication technologies have progressed generation by generation. Code-division multiplexing (CDM), or code-division multiple access (CDMA), is seen as a promising basic technology for 3G/4G cellular communications and cognitive radio networks. The first part of this work investigates the problem of cognitive code-division multiplexing. We propose a cognitive code-division framework that allows secondary users to share the spectrum with a primary CDMA system. The framework provides the flexibility to allocate the transmit power and the real-valued code sequence of the secondary channel so as to maximize the signal-to-interference-plus-noise ratio (SINR) of the secondary user while ensuring the SINR requirements of all primary channels. This is shown to be a non-convex, NP-hard optimization problem. We provide a novel algorithm that returns a desirable suboptimal solution via semidefinite programming.

To facilitate the implementation, we study the problem of designing binary signatures (spreading codes) for direct-sequence code-division multiple-access (DS-CDMA) systems. Our objective is to find the binary signature that maximizes the SINR at the output of the maximum-SINR (MSINR) linear filter. This maximization over the binary field is NP-hard, with complexity exponential in the signature length. We present a low-complexity search algorithm that outputs a desirable binary solution with a deterministic SINR performance guarantee. Furthermore, we derive easy-to-calculate upper and lower bounds on the SINR of the optimal binary and quaternary sequences, which can serve as benchmarks for any suggested suboptimal design.

Channel coding plays a key role in wireless communications because of the uncertainty of channel fading and noise.
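To make the binary signature problem concrete, the following minimal sketch (all parameters and the interference model are illustrative assumptions, not taken from this work) exhaustively searches a short signature length for the antipodal sequence maximizing the MSINR filter's output SINR, and compares the result against the real-valued eigenvector upper bound:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
L = 8  # signature length, kept tiny so the 2^L exhaustive search is feasible

# Hypothetical interference-plus-noise covariance: a few interfering
# binary signatures plus white noise.
K = 3
interferers = rng.choice([-1.0, 1.0], size=(K, L))
R = sum(np.outer(v, v) for v in interferers) + 0.5 * np.eye(L)
R_inv = np.linalg.inv(R)

def msinr_sinr(s, R_inv):
    # Output SINR of the max-SINR filter w = R^{-1} s is proportional
    # to s^T R^{-1} s (desired user's power taken as 1 here).
    return float(s @ R_inv @ s)

# Exhaustive search over all 2^L antipodal binary signatures.
best_s, best_sinr = None, -np.inf
for bits in itertools.product([-1.0, 1.0], repeat=L):
    s = np.array(bits)
    val = msinr_sinr(s, R_inv)
    if val > best_sinr:
        best_sinr, best_s = val, s

# The best *real-valued* signature of energy L attains L * lambda_max(R^{-1}),
# which therefore upper-bounds the binary optimum.
ub = L * np.linalg.eigvalsh(R_inv)[-1]
print(best_sinr, ub)
```

The exponential cost of this brute-force loop is exactly what motivates the low-complexity search algorithm and the closed-form bounds described above; the eigenvector quantity `ub` is one such easy-to-calculate benchmark.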
The second part of this work investigates generalized-polygon-based (GP-based) coding. The performance of a GP-LDPC code under iterative decoding over the binary erasure channel is determined by the stopping distance of the associated Tanner graph. Unfortunately, it has been proved that finding the stopping distance of a given LDPC code is NP-hard. It is well known that the stopping distance is upper-bounded by the minimum distance. We derive new lower bounds on the stopping distance of LDPC codes generated from generalized polygons.

Compressed sensing (CS) has emerged as a promising technology for efficiently reducing the sampling rate for sparse signals, and there is a very close relationship between CS and error-correcting codes over large discrete alphabets. We propose new deterministic, low-storage constructions of compressive sampling matrices based on classical finite-geometry generalized polygons, and we develop novel recovery algorithms for sparse signals in both noiseless and noisy environments.
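As a minimal illustration of the sparse-recovery setting (not the recovery algorithms or generalized-polygon matrices developed in this work), the following sketch runs orthogonal matching pursuit on noiseless measurements, with a random Gaussian matrix standing in for a deterministic construction; all sizes are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 20, 40, 3  # measurements, signal length, sparsity (toy sizes)

# Stand-in sampling matrix; the thesis instead builds deterministic
# low-storage matrices from finite-geometry generalized polygons.
A = rng.standard_normal((m, n)) / np.sqrt(m)

# k-sparse ground-truth signal and its noiseless measurements.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k) + 2.0
y = A @ x_true

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily pick the column most
    # correlated with the residual, then least-squares refit on the
    # accumulated support.
    residual = y.copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print(np.linalg.norm(y - A @ x_hat))
```

In the noisy setting the stopping rule would be based on the residual norm rather than a fixed sparsity level; the structured matrices proposed in this work aim to make such recovery reliable while avoiding the storage cost of dense random designs.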