My assignment was to write a function firstDeriv() using the two-point formula [f(x+h) - f(x)]/h, where the output of the function is an array of samples of the calculated derivative of the input function. The array has (upper_bound - lower_bound)/0.1 + 1 points (given a 0.1 spacing between points).
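For concreteness, here is how I understand the sample count (a little sanity check I wrote myself; the bounds are numbers I picked, not anything from the assignment):

lower_bound = 0;
upper_bound = 1;
h = 0.1;
N = (upper_bound - lower_bound)/h + 1   % 11 points: 0, 0.1, ..., 1.0
x = lower_bound : h : upper_bound;      % the 11 sample locations
numel(x)                                % prints 11, matching N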
I have the answer, I just don't understand it, so could someone explain to me what is going on in the for loop?
This is my code. I call the function from a separate script file:
M = firstDeriv(@cos, -pi, pi)
Here is the function code, in a separate file:
function [ deriv ] = firstDeriv( input, lower_bound, upward_bound )
    N = ((upward_bound - lower_bound)/0.1) + 1;  % total number of sample points
    h = 0.1;                                     % spacing between sample points
    for i = 1:N-1   % MATLAB indices always start at 1, not 0
        % forward difference at x = lower_bound + (i-1)*h
        deriv(i) = (input(lower_bound + (i-1)*h + h) - input(lower_bound + (i-1)*h)) / h;
    end
end
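From stepping through it myself, what I think is happening is that the loop index i gets turned into a sample location x = lower_bound + (i-1)*h. Here is a little test snippet I wrote (mine, not part of the assignment) that just prints that mapping for the first few indices:

lower_bound = -pi;
h = 0.1;
for i = 1:5
    x = lower_bound + (i-1)*h;   % i=1 -> x = -pi, i=2 -> x = -pi+0.1, ...
    fprintf('i = %d  ->  x = %.4f,  x+h = %.4f\n', i, x, x+h);
end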
My question is: why does the deriv(i) line use input(lower_bound + (i-1)*h) as f(x)? Why is it not just input(x+h)? What is the thought process behind that expression that I'm not comprehending?