DanRRight
Joined: 10 Mar 2008 Posts: 2923 Location: South Pole, Antarctica
Posted: Wed Dec 18, 2013 9:12 am Post subject: Fast weighting functions
Can anyone suggest a CPU-efficient function W for interpolating between two values A and B, so that the result is C = W*A + (1-W)*B? An example of how it looks is here. Ideally the slope and its position would be adjustable.
I'd just take part of sin, sin**2 and similar functions, but those are much slower than anything constructed from multiplication, division and addition, since only those are implemented in the processor's FPU hardware. Or maybe suitable SSE functions exist?
JohnCampbell
Joined: 16 Feb 2006 Posts: 2615 Location: Sydney
Posted: Wed Dec 18, 2013 12:00 pm Post subject:
Dan,
The equation y = -2*x^2*(x-1.5) will produce a curve similar to what you have.
John
DanRRight
Posted: Wed Dec 18, 2013 11:01 pm Post subject:
Perfect, John, thanks!
JohnCampbell
Posted: Thu Dec 19, 2013 2:03 am Post subject:
Dan,
It is useful to compare this to linear interpolation of a value between two known values: this function instead applies more weight to the closer of the two points.
John
DanRRight
Posted: Thu Dec 19, 2013 12:57 pm Post subject:
That behavior is exactly what I need. The others I need are similar but with sharper slopes, or with the steepest part of the slope closer to 0 or 1.
I also need similar curves but on a LOG scale, which is what gives the processor the hardest time. Investigating different choices now.
Curious whether such weighting functions were ever realized as processor extensions like SSE 1-4, since interpolation for graphics imaging is one of the key functions of GPUs. It would be great if processor designers did LOG/EXP and trigonometry in hardware again (like before), so they would run faster (MULT and ADD are done in just one or two processor ticks), or made specialized co-processors for scientific and engineering use.
LitusSaxonicum
Joined: 23 Aug 2005 Posts: 2402 Location: Yateley, Hants, UK
Posted: Thu Dec 19, 2013 1:08 pm Post subject:
Dan,
These interpolations lie at the heart of an engineering method called "The Finite Element Method" (FEM) - John Campbell uses it a lot. You will find lots of polynomial expressions for these interpolation functions in books on the FEM, in 1, 2 and 3D. My particular favourite book is "The Finite Element Method" by O.C. Zienkiewicz and Taylor (McGraw-Hill), although as this is now a multi-volume work and I don't have the latest edition, I don't know which volume to look in! In the FEM, they are referred to as shape functions. The polynomial kind are sometimes named serendipity (chance discovery) functions, but there is a Lagrangian type that is sometimes used.
I have used FEM shape functions for other interpolations. The polynomial kind have simple derivatives, useful for getting back from the value (y) to the coordinate (x).
Eddie
DanRRight
Posted: Fri Dec 20, 2013 12:22 am Post subject:
Thanks Eddie. Damn, I should have asked here earlier - the absence of fast interpolating routines, specifically for the cases where one value is much larger than the other, dragged on me for many years, since I could not afford to use slow functions.
JohnCampbell
Posted: Fri Dec 20, 2013 12:23 am Post subject:
Eddie,
The FE shape function approach is very useful, as it makes interpolation with higher-order polynomials much easier to define. I think I posted a polynomial example a few years ago.
There are (at least) two main classes of functions used.
My understanding of the Lagrangian type is that it degenerates to linear interpolation for 2 points, which has no proximity "weighting", as Dan wants.
The other main class is SIN/COS functions, which do have this proximity weighting, like the function Dan is using. This is especially relevant for plate bending solutions of the kind we learnt from Timoshenko, before FE programs took away the need to calculate by hand. SIN curves give a more exact solution than low-order polynomials.
I am surprised by Dan's comment that SIN/COS are slow, as I thought there were still hardware implementations of these functions.
It is interesting that "weighted" solutions can give a worse answer in cases where a linear solution gives the exact solution.
Also, if the sample values are measured with error, you need to keep the functions very simple, as the errors can explode with more sophisticated approaches.
I find this an interesting subject!
John
DanRRight
Posted: Fri Dec 20, 2013 2:18 am Post subject:
Intel developers adopted a RISC approach to the FPU about 15 years ago, kicking all math functions out of it except the basic four, as I remember. Trigonometry runs as slowly as LOG/EXP because, I suspect, it is implemented the same way: in software, via tables and series built from those four operations. Yes, those four run at the ultimate speed of roughly one processor tick each, but all the other functions are done in software and still suffer.
Even without a precise assessment, you can feel their slowness in this demo test, where the multiplication loop is 10 times larger than the ones for trigonometry and exponentials:
Code:
      real*8 a
      integer i, n
      a = 1.234567d0          ! double precision literal
      n = 100000000
      print*, ' Start '
      do i = 1, n*10          ! 10x more iterations for mult/div
         a = a*a/a
      enddo
      print*, ' Mult/div done with 10x loop '
      do i = 1, n
         a = asin(sin(a))    ! identity, keeps a unchanged
      enddo
      print*, ' Trigon done '
      do i = 1, n
         a = log(exp(a))     ! identity, keeps a unchanged
      enddo
      print*, ' Log/exp done '
      print*, ' All done. Hit Ctrl+C to Exit... '
      read(*,*) a
      end program