Redhot1:
The great granddaddy of books to learn C from is:
"The C Programming Language", by Kernighan and Ritchie.
The thing to remember with this language is that it was designed as a 'portable assembler': a way to write programs that could express common hardware functionality, yet allow programs to be readily moved to new hardware. The concepts expressed in the language, such as pointers (addresses of memory storage) and operators like ++ (increment) and -- (decrement), reflect hardware operations on most common processors.
Just like assembly language, it offers almost no protection against doing things that hardware doesn't support, like setting a pointer to an address that doesn't exist and trying to read the nonexistent contents, or overwriting memory in your program accidentally. The hardware and assembly language don't protect against these things, and neither does C.
C also does not offer all the nifty features of higher-level languages, like strong type checking, memory bounds checking, or automatic memory management. C is 'down on the metal', with much of the functionality provided by additional libraries of code already written and debugged. Learning the standard libraries is the biggest task in learning the language.
C is very useful at low-level hardware control and manipulation, which is why we see it in things like operating system kernels, where its lightweight 'down on the metal' functionality is very handy.
There are other popular languages in the C family that add functionality in different ways, at the expense of some flexibility and memory use. C++ was originally written as a set of libraries and a 'front end' that read C++ code and wrote out C code with references to a set of library routines that implemented much of the language's functionality. It has since gotten its own compilers that generate assembly language and object code (the stuff that runs on the hardware processor) directly. The language design took a variety of object-oriented programming concepts and tried to implement each one as a C++ feature. It features 'strong typing', a kind of error checking that catches many common programming errors, but also prevents some types of runtime dynamic binding.
Objective-C is a different object-oriented model that preserves more C functionality, including C's ability to let you get into trouble with the hardware, while adding a Smalltalk-like object-oriented runtime environment that features dynamic binding between modules and very flexible typing. Apple has more recently given that approach a syntax polish, plus automatic memory management and a few other goodies, in the Swift programming language.
These sorts of languages can in turn be used to build modern dynamic high level programming languages like Perl, Tcl, and Python. These languages are deliberately very much abstracted away from the hardware, and hide things like memory management and processor-specific features from the user. You'll find these languages behind the scenes of processor-independent applications like web-based tools, job control systems, and other fairly abstract (from the hardware) tasks.
The language you learn depends on what you want to do.
I have my biases, of course. Since I wrote lower level things like hardware read-only-memory code to boot processors, kernels for graphics processors, and window systems, much of my code was written in C. For more abstract things, I used more abstract tools, of course.
For a large extensible multimedia framework, I used Objective-C for its dynamic runtime binding and protocol interface specifications. I used PostScript to build some UI elements running atop a window system that included a PostScript interpreter, while I wrote the code inside the window system in C.
(Made me look: I have 3.2 million lines of C code, just under a million lines of Objective C, C++, and PostScript code. No wonder I retired...)