# MOOC_LinearAlgebra_Lesson05


Hello, and welcome to Lesson 5 of this Introduction to Linear Algebra with Wolfram U. The topic of this lesson is matrix equations. Let's begin with a brief overview of the lesson.
Say you have a matrix like the one shown over here, with two rows and three columns, and a column vector <i>x</i>, {2, 5, 1}. In this lesson you will learn how to find the product of <i>A</i> and <i>x</i>, which we will simply write as <i>A x</i>, although you may put a dot in it if you like. This product is used to define a matrix equation such as <i>A x</i> = <i>b</i>, where <i>b</i> is a vector. We will show that such a matrix equation is equivalent to, first of all, a vector equation and, secondly, a linear system. And since matrices are much easier to work with, we will work with matrix equations wherever possible.
Let's begin by defining the product of a matrix and a vector. Suppose you have a matrix, capital <i>A</i>, with <i>n</i> columns <i>a</i>_1, <i>a</i>_2, up to <i>a</i>_<i>n</i>, and a vector <i>x</i> with entries <i>x</i>_1 up to <i>x</i>_<i>n</i>. Then the product of <i>A</i> and <i>x</i> is defined as <i>x</i>_1 • <i>a</i>_1 + <i>x</i>_2 • <i>a</i>_2 + … + <i>x</i>_<i>n</i> • <i>a</i>_<i>n</i>, so each term is a scalar multiplying a vector. Of course, for this to make sense, the number of columns of <i>A</i> must equal the number of entries in <i>x</i>: you need the same <i>n</i> over here and over there. Notice that the right-hand side is a linear combination of the columns of <i>A</i>, with weights taken from <i>x</i>. One last thing: the matrix equation <i>A x</i> = <i>b</i> is then the same as the vector equation <i>x</i>_1 • <i>a</i>_1 + … + <i>x</i>_<i>n</i> • <i>a</i>_<i>n</i> = <i>b</i>.
OK, so as a simple example, here is a matrix-vector product. You have a matrix over here and your vector over there. What you do is take 4 times the first column, 3 times the second column, and 7 times the third column, and you get back {3, 6}. Let's check that with actual calculations: the answer comes from 4 • {1, 0} + 3 • {2, -5} + 7 • {-1, 3}.
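The column-combination computation can be sketched in plain Python as well (an illustration only; the lesson itself uses the Wolfram Language):

```python
# Matrix-vector product computed as a linear combination of the columns,
# exactly as in the definition: A.x = x_1*a_1 + x_2*a_2 + ... + x_n*a_n.
# A sketch for illustration; A is stored as a list of rows.

def matvec_by_columns(A, x):
    n_rows, n_cols = len(A), len(A[0])
    assert n_cols == len(x), "columns of A must match entries of x"
    result = [0] * n_rows
    for j in range(n_cols):              # weight x_j scales column a_j
        for i in range(n_rows):
            result[i] += x[j] * A[i][j]
    return result

# The matrix from the example, with columns {1, 0}, {2, -5}, {-1, 3}:
A = [[1, 2, -1],
     [0, -5, 3]]
print(matvec_by_columns(A, [4, 3, 7]))  # [3, 6]
```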
Now, the point is that the Wolfram Language has a very simple but very powerful function, namely Dot, which does exactly that for you. So here's the matrix, here's the vector, and if you do <i>A</i> . <i>x</i> (that dot in the middle is the multiplication), then you get back just the same answer. Dot is one of the most useful functions in linear algebra.
If we look back over here, you see that the first entry, 3, really came from 4 • 1 + 3 • 2 + 7 • (-1): in effect, you multiply a row by the vector. That motivates the other approach to the multiplication, the row-vector rule. The idea is that to get the <i>i</i>th entry of <i>A</i> . <i>x</i>, you simply take the <i>i</i>th row of <i>A</i> and multiply it, entry by entry, by the vector <i>x</i>. So again, here's an example. You have a matrix and a vector. To get the first entry of <i>A x</i>, you take the first row of <i>A</i>, {2, 3, -1}, and kind of lift it and place it on the vector: 2 • 4 + 3 • 1 + (-1) • 3, which is 8. For the second entry, you do 3 • 4 + 5 • 1 + 2 • 3, which is 23. You can check that with the Dot function, and you get back just the same result, {8, 23}. So the row-vector rule is a very nice way of doing the multiplication in this case.
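The row-vector rule translates just as directly into code; here is a minimal sketch in plain Python (the course itself uses Dot):

```python
# Row-vector rule: entry i of A.x is the ith row of A multiplied,
# entry by entry, by x, and then summed. A sketch for illustration.

def matvec_by_rows(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# The example from the lesson: rows {2, 3, -1} and {3, 5, 2}, vector {4, 1, 3}.
A = [[2, 3, -1],
     [3, 5, 2]]
print(matvec_by_rows(A, [4, 1, 3]))  # [8, 23]
```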
OK, so, a few properties of these products. If you have a matrix <i>A</i> which is <i>m</i> by <i>n</i>, two vectors <i>u</i> and <i>v</i> from <b>R</b>^<i>n</i>, and any scalar <i>c</i>, then <i>A</i> . (<i>u</i> + <i>v</i>) is (<i>A</i> . <i>u</i>) + (<i>A</i> . <i>v</i>), just like in ordinary arithmetic, and <i>A</i> . (<i>c</i> <i>u</i>) is <i>c</i> (<i>A</i> . <i>u</i>). That says that the product kind of commutes with the addition and scalar multiplication of vectors: over here you do the plus first and then the dot, and over there you do the dot first and then the plus, and the results agree. That's the meaning of 'commutes': you have two operations, and they can be more or less interchanged. The other thing is that, in future lessons, we'll see that a matrix gives a linear transformation because of these very properties.
Let's verify these properties in an example. Here is a matrix <i>A</i> and a pair of vectors <i>u</i> and <i>v</i>. Let's check that <i>A</i> . (<i>u</i> + <i>v</i>) and (<i>A</i> . <i>u</i>) + (<i>A</i> . <i>v</i>) give the same answer; comparing them with the double equals sign returns True. For the second property, suppose <i>c</i> is 5. Computing <i>A</i> . (<i>c u</i>) gives one answer, and <i>c</i> (<i>A</i> . <i>u</i>) gives another, which is the same as the earlier one, and hence the two are exactly the same.
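The same spot-check can be done in plain Python; the matrix and vectors below are chosen arbitrarily for illustration (they are not the transcript's values):

```python
# Check A.(u + v) == A.u + A.v  and  A.(c u) == c (A.u) for sample values.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(c, u):
    return [c * a for a in u]

A = [[1, 2, 3],           # an arbitrary 2x3 matrix (illustration only)
     [4, -1, 0]]
u, v, c = [2, 0, 1], [-1, 3, 5], 5

print(matvec(A, add(u, v)) == add(matvec(A, u), matvec(A, v)))  # True
print(matvec(A, scale(c, u)) == scale(c, matvec(A, u)))         # True
```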
So, with that background on matrices and their products, we can now talk about the relationship between matrix equations and linear systems. The point is that every linear system can be written as a matrix equation <i>A x</i> = <i>b</i>: the matrix <i>A</i> is the coefficient matrix for the left-hand sides of the equations, whereas the vector <i>b</i> collects the constants on the right-hand sides.
Let's take a particular linear system with three equations and three unknowns, but first I need to clear the variable <i>x</i>, since I used it early on. The matrix <i>A</i> now has rows {2, 3, 7}, from the left-hand side of the first equation, {5, -1, 11} from the second equation, and {4, -5, 1} from the third, whereas the vector <i>b</i> is {29, 36, -3}, from the right-hand sides. Once you have <i>A</i> and <i>b</i>, you can solve the equations using LinearSolve. You can check the solution from LinearSolve by plugging it back into the equations: I've used the Thread function, which basically sets <i>x</i> to 1, <i>y</i> to 2 and <i>z</i> to 3, and you see that you actually get back True in all the cases.
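Under the hood, a solver like LinearSolve eliminates variables. As a rough sketch of the idea (not the course's code, and far simpler than what LinearSolve actually does), here is Gaussian elimination applied to this very system in plain Python:

```python
# Solve A.x = b by Gaussian elimination with back-substitution.
# A rough illustration of what a linear-system solver does internally.

def gauss_solve(A, b):
    n = len(A)
    # build the augmented matrix [A | b] with float entries
    M = [[float(entry) for entry in row] + [float(rhs)]
         for row, rhs in zip(A, b)]
    for col in range(n):
        # partial pivoting: bring the largest pivot into place
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= factor * M[col][k]
    # back-substitution
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# The system from the lesson: 2x+3y+7z=29, 5x-y+11z=36, 4x-5y+z=-3.
A = [[2, 3, 7], [5, -1, 11], [4, -5, 1]]
b = [29, 36, -3]
print([round(v) for v in gauss_solve(A, b)])  # [1, 2, 3]
```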
So there you have the relationship between a linear system and a matrix equation.
OK, once you understand that, we can go a bit deeper and ask when solutions of a matrix equation exist. So, let's say you fix <i>A</i> and you fix <i>b</i>. Then this system, or this equation, has a solution if and only if <i>b</i> is a linear combination of the columns of <i>A</i>.
So let's check that. Here is the matrix, and here is the vector <i>b</i>. Write down the columns of <i>A</i>: a1, a2, a3. The first column is {4, -2, 2}, the second column is {5, 1, 4}, and the third column is {2, 0, 5}, just like you can see over here. Now, let's try a particular linear combination: 2 times the first column, 3 times the second one, and 7 times the third one. The answer is {37, -1, 51}, which is exactly the same as <i>b</i>. Now, the question is how to find such weights; I'll tell you in just a minute, but the main point is that this says you actually have a solution, {2, 3, 7}, for the equation. Let's check the answer is correct using LinearSolve: you do get back {2, 3, 7}. The point is that every time you have a solution, you really have a linear combination which lets you express <i>b</i> in terms of the columns of <i>A</i>.
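That equivalence is easy to test numerically. A short Python check, using the columns and weights above, that the solution weights really do reconstruct <i>b</i> (illustration only):

```python
# A solution of A.x = b is exactly a set of weights expressing b
# as a linear combination of the columns of A.

a1, a2, a3 = [4, -2, 2], [5, 1, 4], [2, 0, 5]     # columns of A
b = [37, -1, 51]

weights = [2, 3, 7]                               # the solution found above
combo = [sum(w * col[i] for w, col in zip(weights, (a1, a2, a3)))
         for i in range(len(b))]
print(combo, combo == b)  # [37, -1, 51] True
```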
On the other hand, suppose you fix your <i>A</i> and then you vary <i>b</i>. Question: what happens then? Well, the point is that you have three equivalent statements over here. Saying that the equation has a solution for all <i>b</i> is the same thing as saying that every <i>b</i> is a linear combination of the columns of <i>A</i>, or that the columns of <i>A</i> span the space from which <i>b</i> comes. So that's a relationship between <i>A</i> and <i>b</i>, but now <i>A</i> is fixed and <i>b</i> is varying.
So here's the matrix <i>A</i>, for example, and here's an arbitrary vector <i>b</i> in two dimensions. Now, if I solve the system <i>A x</i> = <i>b</i> using LinearSolve, I get back a solution. Let's check it's actually correct by plugging back: the first entry times the first column of <i>A</i>, plus the second entry times the second column of <i>A</i>, should equal <i>b</i>. And, oh, that doesn't look right. Well, I didn't simplify far enough: if I apply the Simplify function, then I actually get back True. It looked like we didn't get the answer, but in fact just a bit of simplification gives you exactly the required result.
The main point is that in this case we found a solution for all <i>b</i>, not just for a particular <i>b</i>. The statements over here simply mean that the columns of <i>A</i> span the plane <b>R</b>^2, because every vector <i>b</i> can be written as a linear combination of the columns of <i>A</i>.
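For a 2 × 2 matrix, the columns span <b>R</b>^2 exactly when the determinant is nonzero, and then <i>A x</i> = <i>b</i> can be solved for every <i>b</i>, for instance by Cramer's rule. The transcript's matrix isn't reproduced here, so this Python sketch uses a made-up invertible matrix:

```python
# Cramer's rule for a 2x2 system: if det(A) != 0, the columns of A span R^2
# and A.x = b has a unique solution for every b. Matrix chosen arbitrarily.

def solve_2x2(A, b):
    (a, c), (p, q) = A          # A = [[a, c], [p, q]]; columns {a, p}, {c, q}
    det = a * q - c * p
    assert det != 0, "columns do not span R^2"
    return [(b[0] * q - c * b[1]) / det,
            (a * b[1] - b[0] * p) / det]

A = [[1, 2],
     [3, 4]]                    # det = -2, so the columns span R^2
for b in ([1, 0], [0, 1], [37, -1]):
    x = solve_2x2(A, b)
    # plug back in: x[0]*column_1 + x[1]*column_2 should equal b
    print([x[0] * A[i][0] + x[1] * A[i][1] for i in range(2)] == b)  # True
```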
So that brings me to the end of this lesson. The main points are these. First, one can compute the product of a matrix and a vector either by using linear combinations or by using row-vector multiplication, and overall the row-vector multiplication is a bit easier to understand. Second, we have a very nice, powerful function called Dot in the Wolfram Language which can be used to compute such matrix-vector products; that's a function we'll often encounter in the course, so it'd be nice if you got familiar with it. Third, the product has nice properties: it commutes with the addition and scalar multiplication of vectors, which is good to know. Finally, every linear system can be written as a matrix equation, and vice versa. In the next lesson, we'll talk about the important concept of linear independence. But before that, do review this lesson; it's an important and basic lesson. And be ready for the discussion of linear independence. So stop here; thank you very much.