Monday, April 30, 2007

DirectX transform with IDXSurface

I added the

dxtransguid.h
dxTmsftGuid.h

files to the C:\Program Files\IE6_LIB\Include directory.



http://msdn2.microsoft.com/en-us/library/aa753550.aspx -


1. Understand the inner workings of the WipeDlg sample application
2. How to load the raw buffer data into the input surfaces?
3. How to retrieve the raw buffer from the output surface?


Use the IDXARGBReadPtr and
IDXARGBReadWritePtr interfaces


There are two ways to create the image from the raw data using the IDXSurfaceFactory interface:

1. CreateSurface() fn
2. CreateFromDDSurface() fn


Set of processes involved in it (see the sketch below):
--------------------------------------------
1. Create an offscreen surface
2. Create the HBITMAP from the raw buffer
3. Get the DC of the offscreen surface
4. Select the raw buffer's HBITMAP into the offscreen surface DC.
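
A minimal GDI sketch of those four steps (the buffer name and sizes are just placeholders; it assumes a 24-bit RGB buffer whose scan lines are already DWORD-aligned):

#include <windows.h>
#include <string.h>

// Wrap a raw 24-bit RGB buffer in an HBITMAP and select it into an offscreen DC.
HDC CreateOffscreenDC(HDC hdcScreen, const BYTE *pRawData, int nWidth, int nHeight)
{
    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = nWidth;
    bmi.bmiHeader.biHeight      = nHeight;      // positive = bottom-up DIB
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 24;
    bmi.bmiHeader.biCompression = BI_RGB;

    // Step 2: create a DIB section and copy the raw pixels into it.
    void *pBits = NULL;
    HBITMAP hBmp = CreateDIBSection(hdcScreen, &bmi, DIB_RGB_COLORS, &pBits, NULL, 0);
    if (!hBmp)
        return NULL;

    // Each DIB scan line is padded to a DWORD boundary (assumed to match pRawData).
    int stride = ((nWidth * 3) + 3) & ~3;
    memcpy(pBits, pRawData, stride * nHeight);

    // Steps 1, 3 and 4: offscreen DC with the raw buffer's bitmap selected into it.
    HDC hdcMem = CreateCompatibleDC(hdcScreen);
    SelectObject(hdcMem, hBmp);                 // caller must restore and DeleteObject later
    return hdcMem;
}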




DirectX Transform filters :
-----------------------------------------
DES (DirectShow Editing Services) uses DirectX transform filters.


Advantages of DirectX transform filters :
--------------------------------------------------------------

1. DES (DirectShow Editing Services) uses DirectX transform filters.

2. The transforms can also be used from HTML.
3. They also enable the graphics processor to accelerate some of the transforms.
4. Mostly PC games use the graphics processor hardware, so most of the time it sits idle.
5. The graphics processor is a high-performance processor, so some of the effects can be done in real time.
6. Graphics processor vendors such as NVIDIA and ATI have developed their own languages for this.
7. DirectX transform filters work with DirectDraw directly, so they are much more efficient than filters derived from the DirectShow base classes.

Labels:

Transition Effects _DirectX Transform

Check the following CLSIDs with the WipeDlg Sample:


Look at the "dxtmsft.h" header file
Working well
-----------------------
The effect CLSIDs are available in "dxtmsft.h" :
1. CLSID_DXTBarn - completed
2. CLSID_DXTBlinds - completed
3. CLSID_DXTCheckerBoard - completed
4. CLSID_DXTWipe - completed
5. CLSID_Pixelate - completed
6. CLSID_DXTGradientWipe - completed
7. CLSID_DXTInset - completed
8. CLSID_DXTIris
9. CLSID_DXTRadialWipe
10. CLSID_DXTRandomBars
11. CLSID_DXTRandomDissolve
12. CLSID_DXTRevealTrans
13. CLSID_DXTSlide
14. CLSID_DXTSpiral
15. CLSID_DXTStretch
16. CLSID_DXTStrips
17. CLSID_DXTZigzag
18. CLSID_DXFade
19. CLSID_Wheel - I added the GUID from the registry

DXImageTransform.Microsoft.Blur


Some effect CLSIDs are available in "dxtmsft3.h"

The following filters add some words or images to the output, but they are working fine :
--------------------------------------------------------------------------------------------------------------------
1.CLSID_DXTMetaBurnFilm - some words are added to the image... the word is "MetaCreations..."
2.CLSID_DXTMetaCenterPeel
3.CLSID_DXTMetaColorFade
4.CLSID_DXTMetaFlowMotion
5.CLSID_DXTMetaGriddler
6.CLSID_DXTMetaGriddler2
7.CLSID_DXTMetaJaws
8.CLSID_DXTMetaLightWipe
9.CLSID_DXTMetaLiquid
10.CLSID_DXTMetaPageTurn
11.CLSID_DXTMetaPeelPiece
12.CLSID_DXTMetaPeelSmall
13.CLSID_DXTMetaPeelSplit
14.CLSID_DXTMetaRadialScaleWipe
15.CLSID_DXTMetaRipple
16.CLSID_DXTMetaRoll
17.CLSID_DXTMetaThreshold
18.CLSID_DXTMetaTwister
19.CLSID_DXTMetaVacuum
20.CLSID_DXTMetaWater
21.CLSID_DXTMetaWhiteOut
22.CLSID_DXTMetaWormHole


Wipe - CLSID_DXTWipe
Pixelate - CLSID_Pixelate


Not working :
---------------------
1.CLSID_DXTCheckerBoardPP
2.CLSID_DXTChromaPP
3.CLSID_DXTComposite
4.CLSID_DXTConvolution
5.CLSID_DXTDropShadow
6.CLSID_DXTDropShadowPP
7.CLSID_DXTGlow
8.CLSID_DXTGlowPP
9.CLSID_DXTGradientD
10.CLSID_DXTLabel
11.CLSID_DXTLight
12.CLSID_DXTLightPP
13.CLSID_DXTMaskFilter
14.CLSID_DXTICMFilter
15.CLSID_DXTICMFilterPP
16.CLSID_DXTMatrix
17.CLSID_DXTMatrixPP
18.CLSID_DXTMotionBlur
19.CLSID_DXTMotionBlurPP
20.CLSID_DXTRandomBarsPP
21.CLSID_DXTRedirect
22.CLSID_DXTScale
23.CLSID_DXTShadow
24.CLSID_DXTShadowPP
25.CLSID_DXTStripsPP
26.CLSID_DXTWave
27.CLSID_DXTWavePP
28.CLSID_DXTWipePP



Motion blur can be achieved with the DirectX Transform motion blur effect, which does the transform in place...


If a CLSID ends with PP, it refers to a property page:

CLSID_DXTWipePP - the DXT Wipe filter's property page (PP = Property Page).


We can also write our own transforms...

We can test them through the DXETool application...

Labels:

Thursday, April 26, 2007

Learnt things in 26_Apr_2007

1.Communication
2.Technology
Directshow
VC++




The solution to the problem is...

1. We have to develop an output pin (based on CBaseOutputPin) that can hold two downstream input pins.
Look at the implementation of CBaseOutputPin and imitate it.

2. Use two different input pins,

so that each one can read samples from a separate upstream pin...




PushMode
Pull mode


Push mode works this way: the upstream filter's output pin
delivers samples to the downstream filter's input pin by
calling IMemInputPin::Receive() or
IMemInputPin::ReceiveMultiple() on it. (The CNetworkSend
class uses CBaseInputPin as base for its input pins and
CBaseInputPin implements IMemInputPin.)

Pull mode works this way: the downstream filter's input pin
reads samples from the upstream filter's output pin by
calling IAsyncReader::SyncRead(),
IAsyncReader::SyncReadAligned() or
IAsyncReader::Request()/IAsyncReader::WaitForNext() on it.


The filter that initiates a data transfer (that is, the
upstream one in push mode and the downstream one in pull
mode) must do it on a separate thread. This thread may be
spawned internally or by, respectively, a preceding or
following filter.


In your case, you would need a following filter to handle
the thread but since the network sender is a renderer, there
is no following filter, so you must spawn a thread by
yourself. This thread needs to repeatedly call some internal
method on the input pin for it to read data from the
upstream filter's output pin and multicast it.


Then you have another problem: the muxer delivers a PS while
the splitter delivers a video ES but the network sender
requires a TS. So your input pin should be modified to
either accept a PS or a video ES and internally remux it
into a TS.




The input pin will either process the data synchronously (that is, completely inside the Receive method), or
process it asynchronously on a worker thread. (Do I have to do this in the MV Merge Frames filter?) The input pin is allowed to block within the Receive method
if it needs to wait for resources.


For video overlay, as I said earlier, we need a new output pin which can hold two IMemInputPin pointers,

whereas CBaseOutputPin holds only one downstream input pin. The logic of CTransformFilter will not suit the two-input-pin requirement

because CTransformFilter uses only one input pin; moreover, CBaseOutputPin is what holds the connected input pin.
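
A rough sketch of what such a filter could look like (class and member names are made up; it relies on the DirectShow base classes):

#include <streams.h>   // DirectShow base classes

// A CBaseFilter-derived filter that owns two input pins and one output pin,
// since CTransformFilter assumes a single input pin.
class CVideoOverlayFilter : public CBaseFilter
{
public:
    CVideoOverlayFilter(LPUNKNOWN pUnk)
        : CBaseFilter(NAME("Video Overlay"), pUnk, &m_csFilter, GUID_NULL /* real CLSID here */)
    {
        // In a real filter the pins would be created here (and deleted in the destructor).
        m_pInput[0] = m_pInput[1] = NULL;
        m_pOutput = NULL;
    }

    int GetPinCount() { return 3; }

    CBasePin *GetPin(int n)
    {
        switch (n)
        {
        case 0:  return m_pInput[0];   // background video
        case 1:  return m_pInput[1];   // overlay video
        case 2:  return m_pOutput;     // composited output
        default: return NULL;
        }
    }

private:
    CCritSec        m_csFilter;        // filter-wide lock
    CBaseInputPin  *m_pInput[2];       // would be CBaseInputPin-derived pins
    CBaseOutputPin *m_pOutput;         // custom output pin that delivers downstream
};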




The existing documentation "Introduction to DirectShow
Filter Development" is a good place to start, with attention to the
sections on how filters connect and on data flow for filter developers.



1. I installed Visual Studio 2005...
2. NNSTestAgent only displays the disk speed as zero.
3. Learning DirectX Transform filters




In VS 2005, when I tried to run the service application, the client application got an error like

"Invalid System Authorization".

So I changed the service account to "Local System" and now it is working well..

Previously the service account was kept as "Local Service"...


Next, add the Configurator client code to the WMCSApp application...







For DirectShow work, you may have to know about:

1.DirectShow
2.VC++
3.CSharp Remoting
4.CSharp WMI classes
5.Sql Server
6.Message Queue
7.MTS
8.COM , DCOM and COM+


http://www.databasejournal.com/features/mssql/archives.php - Sql Server archive




http://sworks.com/keng/da/multimedia/dtrans/c++/ - code for the DirectX Transform samples is available here

Labels:

Wednesday, April 25, 2007

Push mode and Pull mode...

PushMode
Pull mode


Push mode works this way: the upstream filter's output pin
delivers samples to the downstream filter's input pin by
calling IMemInputPin::Receive() or
IMemInputPin::ReceiveMultiple() on it. (The CNetworkSend
class uses CBaseInputPin as base for its input pins and
CBaseInputPin implements IMemInputPin.)

Pull mode works this way: the downstream filter's input pin
reads samples from the upstream filter's output pin by
calling IAsyncReader::SyncRead(),
IAsyncReader::SyncReadAligned() or
IAsyncReader::Request()/IAsyncReader::WaitForNext() on it.


The filter that initiates a data transfer (that is, the
upstream one in push mode and the downstream one in pull
mode) must do it on a separate thread. This thread may be
spawned internally or by, respectively, a preceding or
following filter.
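
As a reminder of what the push-mode entry point looks like, here is a minimal sketch of a CBaseInputPin-derived pin (the class name is hypothetical); CBaseInputPin already supplies the IMemInputPin plumbing:

#include <streams.h>   // DirectShow base classes

class CNetworkInputPin : public CBaseInputPin
{
public:
    CNetworkInputPin(CBaseFilter *pFilter, CCritSec *pLock, HRESULT *phr)
        : CBaseInputPin(NAME("NetworkInput"), pFilter, pLock, phr, L"Input") {}

    // Accept anything for the sketch; a real pin checks the media type here.
    HRESULT CheckMediaType(const CMediaType *pmt) { return S_OK; }

    // Called by the upstream output pin for every sample (push mode).
    STDMETHODIMP Receive(IMediaSample *pSample)
    {
        // The base class handles flushing/stopped states and sample properties.
        HRESULT hr = CBaseInputPin::Receive(pSample);
        if (FAILED(hr))
            return hr;

        BYTE *pData = NULL;
        pSample->GetPointer(&pData);
        long cbData = pSample->GetActualDataLength();

        // ... process or forward the cbData bytes at pData here ...
        (void)pData; (void)cbData;
        return S_OK;
    }
};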

Labels:

Get IMemInputPin from CBaseOutputPin and more errors

Today :
-----------
1. work with MergeFrames filter

Still I got the same error...


Allocator is not committed...

So I commented out code in the output pin's DecideAllocator() fn:


within it I kept only the code related to the output pin and removed all the code for the input pin...

Now it is working...




After displaying one image in the video window, the video window is blocked.

Suspected causes :
--------------------------
Somewhere a thread is blocked...

1. So look at the things related to threads and blocking...

2. Identify why the Receive() fn is not called continuously...


Execution Order :
---------------------------
1.cvSyncInputPin::Receive() fn
2. VideoOverlay::Receive() fn
3. VideoOverlay::Transform() fn
4. cvSyncOutputPin::NonDelegatingAddRef() fn
5. cvSyncOutputPin::Deliver()


look at the COutputQueue class...

The GetGUIDName() fn is used to get the name of a GUID...




The following fn is used to display the name of a GUID.
char* GetGUIDName(const GUID &guid)
{
static char fourcc_buffer[20];
struct TGUID2NAME
{
char *szName;
GUID guid;
};
#define OUR_GUID_ENTRY(name, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8) \
{ #name, { l, w1, w2, { b1, b2, b3, b4, b5, b6, b7, b8 } } },
TGUID2NAME names[] =
{
#include <uuids.h>   // assumed: the bracketed header was lost in the post; uuids.h declares the GUIDs via OUR_GUID_ENTRY
};

if(guid==GUID_NULL)
{
return "GUID_NULL";
}
for(int i=0; i < sizeof(names)/sizeof(names[0]); i++)
{
if(names[i].guid==guid)
{
return names[i].szName;
}
}
//if we get here, the return value is only valid until the next call to this function
if(guid.Data2==0 && guid.Data3==0x10 && ((DWORD *)guid.Data4)[0]==0xaa000080 && ((DWORD *)guid.Data4)[1]==0x719b3800)
{
char tmp[sizeof(DWORD)+1];
memset(tmp,0,sizeof(DWORD)+1);
memcpy(tmp,&guid.Data1,sizeof(DWORD));
_snprintf(fourcc_buffer,20,"FOURCC '%s'",tmp);
return fourcc_buffer;
}
BYTE *Uuid=NULL;
static char uuidbuffer[50];
UuidToString(const_cast<GUID *>(&guid), &Uuid);
sprintf(uuidbuffer,"{%s}",Uuid);
RpcStringFree(&Uuid);
return uuidbuffer;
}




CBaseOutputPin internally stores only one input pin,

so if we have multiple input pins we have to use our own pin derived from CBasePin,


or we have to define the output pin's Deliver() fn as follows :

Normal output pin and its Deliver() :
-------------------------------------------------
OutputPin :: Deliver(IMediaSample * pSample)
{

return m_pOutput->m_pInputPin->Receive(pMediaSample);

}



Change it as follows :

OutputPin::Deliver(IMediaSample* pSample1, IMediaSample* pSample2 )
{
m_pOutput->m_pInputPin1->Receive(pSample1);
m_pOutput->m_pInputPin2->Receive(pSample2);   // the second sample goes to the second input pin
}



COutputQueue has a thread and a blocking mechanism...

We called its Receive() function in the output pin's Deliver() fn.

So the problem is in the COutputQueue class. That is why the DirectShow window is blocked and

that is the reason we got the Deliver() fn as the last debug string..

Look at the COutputQueue class implementation

Or, without calling the Deliver() fn, we could directly call:


m_pFilter->m_ip[0]->Receive(pMediaSample[0]);
m_pFilter->m_ip[1]->Receive(pMediaSample[1]);

This did not work well, so we need to look for another way...



Create a separate queue for each input pin and
work with it (a minimal sketch of such a queue is below)...
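
This is my own hypothetical helper (not DirectShow's COutputQueue): each input pin's Receive() pushes the sample in, and a worker thread pops samples out to merge the frames.

#include <streams.h>   // CCritSec, CAutoLock, IMediaSample
#include <deque>

class CSampleQueue
{
public:
    ~CSampleQueue() { Flush(); }

    void Push(IMediaSample *pSample)
    {
        CAutoLock lock(&m_cs);
        pSample->AddRef();             // keep the sample alive while it is queued
        m_queue.push_back(pSample);
    }

    // Returns NULL when the queue is empty; the caller must Release() the sample.
    IMediaSample *Pop()
    {
        CAutoLock lock(&m_cs);
        if (m_queue.empty())
            return NULL;
        IMediaSample *pSample = m_queue.front();
        m_queue.pop_front();
        return pSample;
    }

    void Flush()
    {
        CAutoLock lock(&m_cs);
        while (!m_queue.empty())
        {
            m_queue.front()->Release();
            m_queue.pop_front();
        }
    }

private:
    CCritSec m_cs;                     // protects the queue against both pin threads
    std::deque<IMediaSample *> m_queue;
};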




I got an error like this...

GraphEdit reported the error as follows :
"Run-Time Check Failure #0 - The value of ESP was not properly saved across a function call. This is usually a result of calling a function declared with one calling convention with a function pointer declared with a different calling convention."


How to get the IMemInputPin* from a CBaseInputPin-derived pin :
----------------------------------------------------------

hr = m_pFilter->m_ip[0]->QueryInterface(IID_IMemInputPin,(void**)&m_pInputPin0);
if (FAILED(hr))
{
OutputDebugString("\n Error while Querying InputPin0 at cvSyncOutputPin::Active()");
return hr;
}
hr = m_pFilter->m_ip[1]->QueryInterface(IID_IMemInputPin,(void**)&m_pInputPin1);
if (FAILED(hr))
{
OutputDebugString("\n Error while Querying InputPin1 at cvSyncOutputPin::Active()");
return hr;
}





A multiplexer means multiple inputs and a single output...
A demultiplexer means one input and multiple outputs.

Labels:

This pin cannot use the supplied media type and Cannot allocate a sample when the allocator is not active.

---------------------------
GraphEdit
---------------------------
The graph could not change state.

This pin cannot use the supplied media type. (Return code: 0x8004022a)
---------------------------
OK
---------------------------


9.00 - 9.30 - I tried to change the media type using the SetMediaType() fn... I got a big enough output window in GraphEdit

I got an error as

---------------------------
GraphEdit
---------------------------
The graph could not change state.

This pin cannot use the supplied media type. (Return code: 0x8004022a)
---------------------------
OK
---------------------------

One thing I noted about the media type in the SetMediaType() fn:

the width, height and image size in the VIDEOINFOHEADER were correct for the output pin (400, 400)..

but the rcTarget and rcSource rectangles remained the same as the input pin's... (240, 320)


9.30 - 10.30 - Even though I set the rcTarget and rcSource rectangles, I got an error...

Still I am not able to get the IMediaSample of the output pin...

the process of getting the output pin's IMediaSample failed...



10.30 - 11.30 - I tried several approaches; all of them failed...

- The output pin's NonDelegatingAddRef() and NonDelegatingRelease() are called continuously

I commented out the DisplayHistogram() fn for the output pin...




AM_SAMPLE2_PROPERTIES * const pProps = m_ip[0]->SampleProps();
DWORD dwFlags = 0;


pProps->dwSampleFlags - 273
AM_SAMPLE_SPLICEPOINT - 1

AM_SAMPLE_TIMEVALID - 16
AM_SAMPLE_STOPVALID - 256


if ( ! (pProps->dwSampleFlags & AM_SAMPLE_SPLICEPOINT ) )
{
//the condition is true only when the AM_SAMPLE_SPLICEPOINT bit (value 1) is not set, i.e. when dwSampleFlags is an even number

}



For the GetBuffer() fn, the following is tried :

HRESULT hr = m_op->m_pAllocator->GetBuffer(
&pOutSample
, pProps->dwSampleFlags & AM_SAMPLE_TIMEVALID ?
&pProps->tStart : NULL
, pProps->dwSampleFlags & AM_SAMPLE_STOPVALID ?
&pProps->tStop : NULL
, dwFlags
);

dwFlags - 0


Flags passed to the GetBuffer() fn...

#define AM_GBF_PREVFRAMESKIPPED 1
#define AM_GBF_NOTASYNCPOINT 2
#define AM_GBF_NOWAIT 4
#define AM_GBF_NODDSURFACELOCK 8


//For Displaying the DirectShow Error...

void ShowError(HRESULT hr)
{
if (FAILED(hr))
{
TCHAR szErr[MAX_ERROR_TEXT_LEN];
DWORD res = AMGetErrorText(hr, szErr, MAX_ERROR_TEXT_LEN);
if (res == 0)
{
wsprintf(szErr, "Unknown Error: 0x%2x", hr);
}

MessageBox(0, szErr, TEXT("Error!"), MB_OK | MB_ICONERROR);
}
}

//For displaying Error With ErrorCode :
void ShowError(HRESULT hr)
{
if (FAILED(hr))
{
TCHAR szErr[MAX_ERROR_TEXT_LEN];
TCHAR szNumber[20];
DWORD res = AMGetErrorText(hr, szErr, MAX_ERROR_TEXT_LEN);
if (res == 0)
{
wsprintf(szErr, "Unknown Error: 0x%2x", hr);
}
wsprintf(szNumber," :0x%2x",hr);
strcat(szErr,szNumber);
MessageBox(0, szErr, TEXT("Error!"), MB_OK | MB_ICONERROR);
}
}





I got an error while trying to get the output media sample...

---------------------------
Error!
---------------------------
Cannot allocate a sample when the allocator is not active. :0x80040211
---------------------------
OK
---------------------------
VFW_E_NOT_COMMITTED - the allocator is decommitted.

This is returned by the output pin's allocator's GetBuffer() fn...


To solve this problem I looked at the documentation: before getting the output pin's media sample using GetBuffer(), we have to call the

Commit() fn of the allocator. This is done in the Active() fn of the output pin, and when streaming stops we have to call the Decommit()

fn to decommit the media samples; this is done in the Inactive() fn of the output pin...

CMyFilter is the class derived from CBaseFilter.

CMyFilter::Pause()
{
outputPin->Active();
}

CMyFilter::Stop()
{
outputPin->Inactive();
}


HRESULT COutputPin::Active()
{
return outputAllocator->Commit();
}


After calling Pause() we can call the GetBuffer() fn of the output allocator to get the IMediaSample of the output pin...

While closing, Stop() has to decommit the allocated samples:

HRESULT COutputPin::Inactive()
{
return outputAllocator->Decommit();
}


After solving this problem I got the previous error again:

"This pin cannot use the specified media type..."

I had specified invalid values in the rcSource and rcTarget rectangles of the media type.

We have to set this media type on the output pin... After setting the media type, we have to delete it

using the DeleteMediaType() fn.

Now the output window has invalid video data... it is flickering...


and it displayed the error "This pin cannot use the specified media type". I am not able to fix the problem...


Now the problem is how to set the media type for an output pin ?

GetMediaType should create a media type for the output based on the input
format (and your known transformation characteristics). SetMediaType should
set the selected format and its buffer size, and then DecideBufferSize
should set the allocator's buffer size based on the type passed to
SetMediaType. But remember that if you want to work with the video
renderer, you may get a dynamic type change on the first buffer, typically
to change the stride, and you need to use that type.
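
A minimal sketch of the DecideBufferSize() part of that advice, sizing the allocator from the type the pin agreed to (COutputPin matches the pseudocode above; m_mt is the current media type stored by the base pin class, the rest is assumption):

// Assumes the DirectShow base classes (streams.h).
HRESULT COutputPin::DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pProp)
{
    CheckPointer(pAlloc, E_POINTER);
    CheckPointer(pProp, E_POINTER);

    VIDEOINFOHEADER *pvi = (VIDEOINFOHEADER *)m_mt.Format();
    if (pvi == NULL)
        return E_UNEXPECTED;

    pProp->cBuffers = 1;
    pProp->cbBuffer = pvi->bmiHeader.biSizeImage;   // one full output frame

    ALLOCATOR_PROPERTIES actual;
    HRESULT hr = pAlloc->SetProperties(pProp, &actual);
    if (FAILED(hr))
        return hr;

    // The allocator may give us more than we asked for, but never less.
    if (actual.cbBuffer < pProp->cbBuffer || actual.cBuffers < pProp->cBuffers)
        return E_FAIL;

    return S_OK;
}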


CTransformOutputPin class:
1.CheckMediaType()
2.SetMediaType()
3.GetMediaType( )...

CTransformFilter class
1.GetMediaType() fn


"This pin can not use the specified media type"
To avoid this error...

Get the sample size of an Output Pins media sample size...
and remove the setMediatype() fn...


I get the sample size using:

AM_MEDIA_TYPE* pmt = NULL;
OutputSample->GetMediaType(&pmt);   // GetMediaType() takes an AM_MEDIA_TYPE** and may leave pmt NULL if the type has not changed
VIDEOINFOHEADER* pvi = (VIDEOINFOHEADER*)pmt->pbFormat;

long lSize;
lSize = pvi->bmiHeader.biSizeImage;

DeleteMediaType(pmt);


I am looking at setting the output pin's media type...

We can retrieve the output pin's media sample's properties and its media type...

So I may try this approach to set the media type...

IMediaSample2 exposes its properties through the AM_SAMPLE2_PROPERTIES structure...
we can set the media type through the pMediaType member of the AM_SAMPLE2_PROPERTIES obtained from the output pin's
media sample.


I can change the media type in two ways:

1. Change the media type using the AM_SAMPLE2_PROPERTIES structure and set it as the output pin's media sample
media type.

2. Another way is to imitate the GetMediaType() function of CTransformFilter.

(In my opinion, this seems the better option because GetMediaType() is executed only two or three times, whereas

the first approach would be executed for every Transform() call)...


GetMediaType() is not getting executed, so I have to identify the reason.

Actually I didn't print the debug string in the output pin's GetMediaType() fn..


GetMediaType() with EnumMediaTypes() works well, but the output window is the same as the input pin's..

The SetMediaType() fn with EnumMediaTypes() works well, but the output window can be different from the input pin's...

So the problem lies with the EnumMediaTypes() fn...


In the second approach I faced many problems, and in the end the output pin's media type was not set properly,
so I got an error as follows :

---------------------------
GraphEdit
---------------------------
The graph could not change state.

This pin cannot use the supplied media type. (Return code: 0x8004022a)
---------------------------
OK
---------------------------


So change the media type the first way...

Even though I changed the media type, I still got the same error...

But I didn't get an error anywhere in my program, and the output also shows some flickering due to the invalid output pin video size...




Look at the working of CTransformOutputPin's GetMediaType() fn...

and check whether any EnumMediaTypes() function is used in the CTransformOutputPin class..

Labels:

Input pin size is different from output pin size

1. GetMediaType() fn

we returned the size of the output pin's media type

The CheckMediaType() fn is also called before setting the media type,

and in the SetMediaType() function we have to specify the output pin's width and height

2.DecideBufferSize() fn

allocate the output pin's allocator buffer by specifying width and height






HRESULT OutputPin::SetMediaType(const CMediaType* pmt)

{
// I doubted whether it is possible to change the output pin's size with this method.
// The reason is that the CMediaType is const, so they may not allow us to change the media type.

// In fact we only need to change the output video size, which resides in the pointed-to format block (pbFormat), so we can change it easily.

VIDEOINFOHEADER* pvi = (VIDEOINFOHEADER*) pmt->pbFormat;
pvi->bmiHeader.biWidth = 400;
pvi->bmiHeader.biHeight = 400;
pvi->bmiHeader.biSizeImage = pvi->bmiHeader.biWidth * pvi->bmiHeader.biHeight * (pvi->bmiHeader.biBitCount / 8);

return S_OK;   // a real pin would typically also call the base-class SetMediaType() here
}





I completed both the steps...

Next, look at the Deliver() fn...

We have to copy the output image to the output pin's media sample, similar to what

InitializeOutputSamples() in the CTransformFilter class does (see the sketch below).
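
The base-class code is not reproduced here; the following is just a minimal sketch (with a hypothetical helper name) of the copy into the output sample:

// Copy a finished output frame (pOutputImage/cbImage are placeholder names)
// into the IMediaSample obtained from the output pin's allocator.
HRESULT CopyFrameToSample(IMediaSample *pOutSample, const BYTE *pOutputImage, long cbImage)
{
    CheckPointer(pOutSample, E_POINTER);

    BYTE *pDest = NULL;
    HRESULT hr = pOutSample->GetPointer(&pDest);
    if (FAILED(hr))
        return hr;

    // Never write past the allocator's buffer.
    if (cbImage > pOutSample->GetSize())
        return E_OUTOFMEMORY;

    memcpy(pDest, pOutputImage, cbImage);
    pOutSample->SetActualDataLength(cbImage);
    pOutSample->SetSyncPoint(TRUE);      // every uncompressed frame is a key frame
    return S_OK;
}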

Labels:

LOG for logging error in DebugString

This function is like sprintf(): we can format the string and send the error message to the

debug output...


Usage :

_LOG("Error code is : 0x%08x", hr);


The above call writes the error to the debug output as follows.
Example:
Error code is : 0x80004566




void _LOG(LPCSTR szFormat, ...)
{
char szMessage[2048];
va_list Args;

va_start(Args, szFormat);
int result=_vsnprintf(szMessage,2048, szFormat, Args);
va_end(Args);
szMessage[2047] = '\0';   // _vsnprintf does not terminate the buffer when it truncates
if(result==-1)
{
OutputDebugString("DebugString too long, truncated!!\n");
}
OutputDebugString(szMessage);
}

Labels:

Display the GUID name

The following fn is used to display the name of a GUID.


char* GetGUIDName(const GUID &guid)
{
static char fourcc_buffer[20];
struct TGUID2NAME
{
char *szName;
GUID guid;
};
#define OUR_GUID_ENTRY(name, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8) \
{ #name, { l, w1, w2, { b1, b2, b3, b4, b5, b6, b7, b8 } } },
TGUID2NAME names[] =
{
#include <uuids.h>   // assumed: the bracketed header was lost in the post; uuids.h declares the GUIDs via OUR_GUID_ENTRY
};

if(guid==GUID_NULL)
{
return "GUID_NULL";
}
for(int i=0; i < sizeof(names)/sizeof(names[0]); i++)
{
if(names[i].guid==guid)
{
return names[i].szName;
}
}
//if we get here, the return value is only valid until the next call to this function
if(guid.Data2==0 && guid.Data3==0x10 && ((DWORD *)guid.Data4)[0]==0xaa000080 && ((DWORD *)guid.Data4)[1]==0x719b3800)
{
char tmp[sizeof(DWORD)+1];
memset(tmp,0,sizeof(DWORD)+1);
memcpy(tmp,&guid.Data1,sizeof(DWORD));
_snprintf(fourcc_buffer,20,"FOURCC '%s'",tmp);
return fourcc_buffer;
}
BYTE *Uuid=NULL;
static char uuidbuffer[50];
UuidToString(const_cast<GUID *>(&guid), &Uuid);
sprintf(uuidbuffer,"{%s}",Uuid);
RpcStringFree(&Uuid);
return uuidbuffer;
}

Labels:

Displaying Error With Error code

//For displaying Error With ErrorCode :

void ShowError(HRESULT hr)
{
if (FAILED(hr))
{
TCHAR szErr[MAX_ERROR_TEXT_LEN];
TCHAR szNumber[20];
DWORD res = AMGetErrorText(hr, szErr, MAX_ERROR_TEXT_LEN);
if (res == 0)
{
wsprintf(szErr, "Unknown Error: 0x%2x", hr);
}
wsprintf(szNumber," :0x%2x",hr);
strcat(szErr,szNumber);
MessageBox(0, szErr, TEXT("Error!"), MB_OK | MB_ICONERROR);
}
}

Labels:

Thursday, April 19, 2007

CBaseOutputpin Error

Error 1:...


d:\Samples\SyncFilter_Backup\SyncFilter.h(213) : error C2695: 'cvSyncOutputPin::Notify': overriding virtual function differs from 'CBasePin::Notify' only by calling convention


class cvSyncOutputPin : public CBaseOutputPin
{

public :
HRESULT Notify(IBaseFilter * pSender, Quality q);
};


Solution :


class cvSyncOutputPin : public CBaseOutputPin
{

public :
STDMETHODIMP Notify(IBaseFilter * pSender, Quality q);
};




Error 2 :
---------------
d:\Samples\SyncFilter_Backup\SyncFilter.h(202) : error C2440: 'initializing' : cannot convert from 'CBaseFilter *'

to 'SyncFilter1 *'

code :
------------
cvSyncOutputPin(TCHAR *pObjectName,
CBaseFilter *pFilter,
CCritSec *pLock,
HRESULT* hr,
LPCWSTR pName
):CBaseOutputPin(pObjectName,
(CBaseFilter*)pFilter,
pLock,
hr,
pName), m_pFilter(pFilter)
{


};


Solution code :
-------------

cvSyncOutputPin(TCHAR *pObjectName,
CBaseFilter *pFilter,
CCritSec *pLock,
HRESULT* hr,
LPCWSTR pName
):CBaseOutputPin(pObjectName,
(CBaseFilter*)pFilter,
pLock,
hr,
pName), m_pFilter((SyncFilter1*)pFilter)
{


};



1. CheckMediaType() is an important fn in any filter; it can change the result of the entire video (a minimal example is below).
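
For example, a CheckMediaType() that only accepts RGB24 video might look like this (CMyPin is a made-up name; it assumes the DirectShow base classes). Rejecting a type here blocks the connection, which is why this one fn decides what the whole filter ever sees:

HRESULT CMyPin::CheckMediaType(const CMediaType *pmt)
{
    CheckPointer(pmt, E_POINTER);

    if (*pmt->Type() != MEDIATYPE_Video)
        return VFW_E_TYPE_NOT_ACCEPTED;

    if (*pmt->Subtype() != MEDIASUBTYPE_RGB24)
        return VFW_E_TYPE_NOT_ACCEPTED;

    // Require a VIDEOINFOHEADER format block so Transform() can trust the layout.
    if (*pmt->FormatType() != FORMAT_VideoInfo ||
        pmt->FormatLength() < sizeof(VIDEOINFOHEADER))
        return VFW_E_TYPE_NOT_ACCEPTED;

    return S_OK;
}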




Unsafe code error :
----------------------
In C# I got the following error :

Error : build problem with unsafe code
Solution :

Go to Project -> Properties and then to the Build Tab. Ensure for all Configurations (on the top of this page) the Allow unsafe code check box is checked. If it is not checked for one of the configurations and Team Build is setup to build that configuration then your build will fail with this error.

Labels:

Windows Services application

Create a Normal C# windows Services application :
-----------------------------------------------------

After creating the application


add the installer to it...


To add installers to your service application :
-----------------------------------------------

In Solution Explorer, access Design view for the service for which you want to add an installation component.
Click anywhere within the designer's surface.
In the Description area of the Properties window, click the Add Installer link.
A new class, ProjectInstaller, and two installation components, ServiceProcessInstaller and ServiceInstaller, are added to your project, and property values for the service are copied to the components.

Click the ServiceInstaller component and verify that the value of the ServiceName property is set to the same value as the ServiceName property on the service itself.
To determine how your service will be started, click the ServiceInstaller component and set the StartType property to the appropriate value:
Manual - The service must be manually started after installation. For more information, see Starting Services.
Automatic - The service will start by itself whenever the computer reboots.
Disabled - The service cannot be started.

To determine the security context in which your service will run, click the ServiceProcessInstaller component and set the appropriate property values. For more information, see Specifying the Security Context for Services.
Override any methods for which you need to perform custom processing. For more information, see Overriding Default Methods on Installation Components.
Perform steps 1 through 6 for each additional service in your project.
Note For each additional service in your project, you must add an additional ServiceInstaller component to the project's ProjectInstaller class. The ServiceProcessInstaller component added in step three works with all of the individual service installers in the project.
Create your setup project and custom action to deploy and install your service. For more information on setup projects, see Setup Projects. For more information on custom actions, see Walkthrough: Creating a Custom Action.


After the installer is added to our windows services,

I selected the ServiceProcessInstaller1 control's properties...

I modified the "Account" property of serviceProcessInstaller1 to "LocalService".


The ServiceInstaller1 control has the "ServiceName" property, which holds the name of the service listed in "Administrative Tools".



...and then install and run the service with InstallUtil:


InstallUtil :
-----------------

In command prompt


installutil windowsServices1.exe //Install the services in Service Control manager
installutil -u windowsServices1.exe // Uninstall the services in Service control manager...


Starting the Service Manually :
----------------------------------

There are several ways you can start a service that has its StartType process set to Manual — from Server Explorer, from the Windows Services Control Manager, or from code. It is important to note that not all of these methods actually start the service in the context of the Services Control Manager; the Server Explorer and programmatic methods of starting the service actually manipulate the controller.

To manually start a service from Server Explorer

In Server Explorer, add the server you want if it is not already listed. For more information, see Accessing and Initializing Server Explorer.
Note The Servers node of the Server Explorer is not available in the Standard Edition of Visual Basic and Visual C# .NET. For more information, see Visual Basic Standard Edition Features or Visual C# Standard Edition Features.
Expand the Services node, and then locate the service you want to start.
Right-click the name of the service, and click Start.
To manually start a service from Services Control Manager

Open the Services Control Manager by doing one of the following:
In Windows 2000 Professional, right-click My Computer on the desktop, and then click Manage. In the dialog box that appears, expand the Services and Applications node.
- or -

In Windows 2000 Server, click Start, point to Programs, click Administrative Tools, and then click Services.
Note In Windows NT version 4.0, you can open this dialog box from Control Panel.
You should now see your service listed in the Services section of the window.

Select your service in the list, right-click it, and then click Start.
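
(Not from the walkthrough above: from native C++ code the equivalent of starting the service manually is the Win32 Service Control Manager API, roughly like this; the service name is just a placeholder.)

#include <windows.h>
#include <stdio.h>

BOOL StartMyService(LPCTSTR szServiceName)   // e.g. TEXT("WindowsService1")
{
    SC_HANDLE hScm = OpenSCManager(NULL, NULL, SC_MANAGER_CONNECT);
    if (!hScm)
        return FALSE;

    SC_HANDLE hSvc = OpenService(hScm, szServiceName, SERVICE_START);
    if (!hSvc)
    {
        CloseServiceHandle(hScm);
        return FALSE;
    }

    BOOL bOk = StartService(hSvc, 0, NULL);
    if (!bOk)
        printf("StartService failed, error %lu\n", GetLastError());

    CloseServiceHandle(hSvc);
    CloseServiceHandle(hScm);
    return bOk;
}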



To See more about Windows Services :
--------------------------------------
www.c-sharpcorner.com
http://www.c-sharpcorner.com/UploadFile/prvn_131971/BirthdayWishScheduler02022006012557AM/BirthdayWishScheduler.aspx

Labels:

UML Basics

UML has 4 common mechanisms:
1. Specifications
2. Adornments - notes..
3. Common divisions
4. Extensibility mechanisms - stereotypes, tagged values
and constraints

Stereotype - extension of the UML vocabulary: new building blocks for which
we can create a new graphical symbol.
Tagged values - extension of the properties of UML elements

Constraint - extension of the semantics of UML



UML has 4 kinds of things :

The UML provides a language for

1. structural things
2. behavioral things
3. grouping things, and
4. annotational things (notes).

Labels:

Windows Services Error

Windows Services Error :
---------------------------
---------------------------
Services
---------------------------
The Service1 service on Local Computer started and then stopped. Some services stop automatically if they have no work to do, for example, the Performance Logs and Alerts service.
---------------------------
OK
---------------------------

I got this error because I had a MessageBox.Show() call to display a message box

during the Start() and Stop() of the Windows service...

Solution :

I removed the MessageBox from the Windows service...

Labels:

3 Tier architecture

The 3-Tier architecture has the following 3-tiers.
1. Presentation Tier
2. Application Tier/Logic Tier/Business Logic Tier
3. Data Tier



We can also name the tiers in a different way, as follows :

1. Presentation Logic
2. Application Logic or Business Logic
3. Data Logic

cbClsExtra parameter in WNDCLASS structure

I have created a custom control with win32. It is a fill bar control.


I want to set the color of the fill color at design time. I put the
color value in the WNDCLASS in the cbClsExtra or cbWndExtra parameter.
How can the custom control get the values of these 2 parameters?


I know I can also send the color value with a message to the custom
control, but that is not what I want. I have found a function named
GetWindowLong, but I don't know how to use this function and whether it
is the right one to get those 2 parameters.



Answers :

Get/SetClassLong and Get/SetWindowLong are the right functions to access
this data. Using them is pretty straightforward - the second parameter
is an offset to where you want to start reading data. If you allocate
8 extra bytes in cbWndExtra you would access them like this:

lValue1 = GetWindowLong(hwnd, 0);
lValue2 = GetWindowLong(hwnd, 4);


and the same w/ cbClsExtra and GetClassLong. Win32 also provides a bunch
of predefined values like GCL_STYLE and GCL_WNDPROC which give you access
to default data that is available for all windows and window classes. All
of the default values are negative numbers so as not to clash with accessing
extra user data bytes.


Answer2

You're not using the Extra parameters correctly. Those
values are used to reserve extra bytes in the block of
memory associated with the class or window. You can
then store things later in that location with SetWindowLong()
or SetClassLong() and retrieve with GetWindowLong()
or GetClassLong().

So if you say:
WNDCLASS wc;
wc.cbWndExtra=4;


then it reserves an extra 4 bytes for you. That's big enough
to hold a pointer to a block of memory if you so desire.
If all you have is a COLORREF, then 4 bytes will handle
that nicely. For clarity in your code, you should use:
wc.cbWndExtra=sizeof(COLORREF);


When you call CreateWindow(), you can pass a 4-byte
value as the last parameter, i.e.:
CreateWindow("mycustomcontrol",.....,RGB(50,100,150));


At WM_CREATE, you pick up the COLORREF and store it:
case WM_CREATE:
{LPCREATESTRUCT cs=(LPCREATESTRUCT)lparam;
COLORREF colour=(COLORREF)(ULONG_PTR)cs->lpCreateParams; // lpCreateParams is a void*, so cast it back
SetWindowLong(hwnd,0,(LONG)colour); //offset 0
}
break;


at any given time, you can retrieve it with:
case WM_someothermessage:
COLORREF colour=(COLORREF)GetWindowLong(hwnd,0); //offset 0
break;


The classic book for learning Windows programming is
"Programming Windows 95" by Charles Petzold (Microsoft
Press). Almost everyone recommends this book, as do I.
It has excellent examples - source & .exe included.

Labels:

Client rectangle size and Meta files

GetClientRect()
function for determining the dimensions of the client area. There's nothing really wrong with this function, but it's a bit inefficient to call it every time you need to use this information. A much better method for determining the size of a window's client is to process the WM_SIZE message within your window procedure. Windows sends a WM_SIZE message to a window procedure whenever the size of the window changes. The lParam variable passed to the window procedure contains the width of the client area in the low word and the height in the high word. To save these dimensions, you'll want to define two static variables in your window procedure:


static int cxClient, cyClient ;

Like cxChar and cyChar, these variables are defined as static because they are set while processing one message and used while processing another message. You handle the WM_SIZE method like so:


case WM_SIZE:
cxClient = LOWORD (lParam) ;
cyClient = HIWORD (lParam) ;
return 0 ;



How can we use meta files in VC++ SDK programming :

Old Meta file usage :
-----------------------------

1.
HDC hdcMeta;
HMETAFILE hmf;

hdcMeta = CreateMetaFile (NULL);
MoveToEx (hdcMeta, 0, 0, NULL) ;
LineTo (hdcMeta, 100, 100) ;
hmf = CloseMetaFile(hdcMeta);


2. For Displaying Meta file :
--------------------------------------

PlayMetaFile(hdc, hmf);


Enhanced Meta files :
----------------------------------

static HENHMETAFILE hemf ;
HDC hdc, hdcEMF ;

hdcEMF = CreateEnhMetaFile (NULL, NULL, NULL, NULL) ; or

hdcEMF = CreateEnhMetaFile (NULL, TEXT ("emf2.emf"), NULL,
TEXT ("EMF2\0EMF Demo #2\0")) ;



Rectangle (hdcEMF, 100, 100, 200, 200) ;

hemf = CloseEnhMetaFile (hdcEMF) ;


Displaying Meta File :
------------------------------

GetClientRect (hwnd, &rect) ;

rect.left = rect.right / 4 ;
rect.right = 3 * rect.right / 4 ;
rect.top = rect.bottom / 4 ;
rect.bottom = 3 * rect.bottom / 4 ;

PlayEnhMetaFile (hdc, hemf, &rect) ;

Labels:

Tamil Font Problem

Hi,

I faced the tamil font problem...

So I installed the ekalappai software ... Afterwards

I got the tamil fonts...

Check it out ....


Regards
Sundara rajan.A

Tuesday, April 17, 2007

My Mirror Image

Things to learn...

1. DirectShow with the QuickTime API

Prepare to talk about yourself :


List of things done in WMINTF :

1. Implemented the QuickTime video compression
2. Implemented writing of H.264 compressed data to an AVI file
3. Writing audio and video to an AVI file...
4. DVB APIs to write video data to the DVB card and receive video data from the DVB card
5. Implemented the Null Encoder

List of things done in WMCsApp :


I developed the following Directshow filters :

1.Bitmap Overlay
2.Blur
3.Channel blur
4.ColorbalanceRGB
5.ColorToGray
6.Inverse
7.Mirror
8.Morphology
9.Overlay transform
10.Resize
11.RGB Scaling
12.Time code burning
13.Zoom
14.Shift Channels
15.Gamma correction
16.Emboss and Edge Detection
17.Tiles filter
18.Mask filter
19. Rotate 90 degree
20.Vortex filter
21.Rotation filter
22.Sepia filter
23.Brightness filter
24.Random Jitter filter
25.Water Effect filter
26.Swirl filter
27.Sphere filter
28.Timewarp filter
29.Pixellate filter
30.Posterize filter
31.Threshold filter
32.Solarize filter
33.Saturation filter
34.hue filter
35.Luminance filter
36.contrast filter

Two input pin and one output pin filter :

37.Absolute Difference filter
38.Dyadic logic filter
39.Dyadic Arithmetic filter
40.Video Overlay filter

I am well versed in OpenCV and GDI+.

Technologies known :

.NET, C#, VC++, DirectShow, OpenCV and GDI+


You need to mentally organize the things you learn yourself:

categorize what is needed first, what is needed last...

and what is essential, like that...


Well versed in C# Windows Services

I have experience hosting a Remoting server as a Windows service...

Monday, April 16, 2007

The 3-Tier architecture has the following 3-tiers.
1. Presentation Tier
2. Application Tier/Logic Tier/Business Logic Tier
3. Data Tier

we can call it in different way as follows :

1.Presentation Logic
2.Application logic or Business Logic
3.Data Logic

CSharp Windows Services

Create a Normal C# windows Services application :
-----------------------------------------------------

After creating the application


add the installer to it...


To add installers to your service application :
-----------------------------------------------

In Solution Explorer, access Design view for the service for which you want to add an installation component.
Click anywhere within the designer's surface.
In the Description area of the Properties window, click the Add Installer link.
A new class, ProjectInstaller, and two installation components, ServiceProcessInstaller and ServiceInstaller, are added to your project, and property values for the service are copied to the components.

Click the ServiceInstaller component and verify that the value of the ServiceName property is set to the same value as the ServiceName property on the service itself.
To determine how your service will be started, click the ServiceInstaller component and set the StartType property to the appropriate value:
Manual - The service must be manually started after installation. For more information, see Starting Services.
Automatic - The service will start by itself whenever the computer reboots.
Disabled - The service cannot be started.

To determine the security context in which your service will run, click the ServiceProcessInstaller component and set the appropriate property values. For more information, see Specifying the Security Context for Services.
Override any methods for which you need to perform custom processing. For more information, see Overriding Default Methods on Installation Components.
Perform steps 1 through 6 for each additional service in your project.
Note For each additional service in your project, you must add an additional ServiceInstaller component to the project's ProjectInstaller class. The ServiceProcessInstaller component added in step three works with all of the individual service installers in the project.
Create your setup project and custom action to deploy and install your service. For more information on setup projects, see Setup Projects. For more information on custom actions, see Walkthrough: Creating a Custom Action.


After the installer is added to our windows services,

I selected the ServiceProcessInstaller1 control's properties...

I modified the "Account" property of serviceProcessInstaller1 to "LocalService".


The ServiceInstaller1 control has the "ServiceName" property, which holds the name of the service listed in "Administrative Tools".



...and then install and run the service with InstallUtil:


InstallUtil :
-----------------

In command prompt


installutil windowsServices1.exe //Install the services in Service Control manager
installutil -u windowsServices1.exe // Uninstall the services in Service control manager...


Starting the Service Manually :
----------------------------------

There are several ways you can start a service that has its StartType process set to Manual — from Server Explorer, from the Windows Services Control Manager, or from code. It is important to note that not all of these methods actually start the service in the context of the Services Control Manager; the Server Explorer and programmatic methods of starting the service actually manipulate the controller.

To manually start a service from Server Explorer

In Server Explorer, add the server you want if it is not already listed. For more information, see Accessing and Initializing Server Explorer.
Note The Servers node of the Server Explorer is not available in the Standard Edition of Visual Basic and Visual C# .NET. For more information, see Visual Basic Standard Edition Features or Visual C# Standard Edition Features.
Expand the Services node, and then locate the service you want to start.
Right-click the name of the service, and click Start.
To manually start a service from Services Control Manager

Open the Services Control Manager by doing one of the following:
In Windows 2000 Professional, right-click My Computer on the desktop, and then click Manage. In the dialog box that appears, expand the Services and Applications node.
- or -

In Windows 2000 Server, click Start, point to Programs, click Administrative Tools, and then click Services.
Note In Windows NT version 4.0, you can open this dialog box from Control Panel.
You should now see your service listed in the Services section of the window.

Select your service in the list, right-click it, and then click Start.



To See more about Windows Services :
--------------------------------------
www.c-sharpcorner.com
http://www.c-sharpcorner.com/UploadFile/prvn_131971/BirthdayWishScheduler02022006012557AM/BirthdayWishScheduler.aspx

Labels:

Aggregation and Association

Object Association :


In object association, one object is formed from the union of two or more other objects.

In object-oriented programming, there are two kinds of object association, aggregation and composition. Both intend to do the same thing, build "large" objects from "smaller" ones. They differ only conceptually, from an object system perspective.


Composition :
--------------
composite types are datatypes which can be constructed in a programming language out of that language's primitive types and

other composite types. The act of constructing a composite type is known as composition.

struct Account
{
int account_number;
char *first_name;
char *last_name;
float balance;
};


Recursive Composition :
------------------------

struct bintree {
struct bintree *left, *right;
// some data
};





Aggregation :
-------------

Composited (composed) objects are called fields, items, members or attributes, and the resulting composition a structure, record, tuple,

user-defined type (UDT), or composite type. The terms usually vary across languages. Fields are given a unique name so that each one can be distinguished from the others.

Sometimes an issue of ownership arises: when a composition is destroyed, should objects belonging to it be destroyed as well?
If not, the case is sometimes called aggregation.
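
A small C++ sketch of the difference, continuing the struct examples above (the class names are made up):

#include <string>
#include <vector>

class Engine { /* ... */ };

// Composition: the Car owns its Engine; destroying the Car destroys the Engine too.
class Car
{
    Engine engine;                     // lifetime bound to the Car
};

class Employee { std::string name; };

// Aggregation: the Department only refers to Employees it does not own;
// destroying the Department leaves the Employee objects alive.
class Department
{
    std::vector<Employee *> members;   // no delete of the Employees in the destructor
};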

Labels:

Friday, April 13, 2007

Texture mapping :

Background Image :

Texture Image :

Output Image :

The texture image is converted to grayscale and blended with the background image.

Texture mapping

texture mapping sample...


private void button1_Click(object sender, System.EventArgs e)
{
//ApplyTexture()

Image bgImage = new Bitmap("D:\\calvin.jpg");
Image textureImg = new Bitmap("D:\\texture1.jpg");
ApplyTexture(bgImage,textureImg,(float)1.0);


}



// using System.Drawing.Imaging;
// Modifies the ORIGINAL bitmap
// textureTransparency has to be between 0 and 1:
// 0 makes the texture fully transparent (invisible), 1 makes it the most visible
public void ApplyTexture (Image img, Image texture,
float textureTransparency)
{
if ( (img ==null) || (texture==null) )
throw new ArgumentNullException();
if (textureTransparency < 0 || textureTransparency > 1)
throw new ArgumentOutOfRangeException(
"Value must be between 0 and 1.");

// Convert the texture to grayscale before using it
MakeImageGrayscale (texture);


Bitmap txtr = new Bitmap(texture);
BitmapData bmData = txtr.LockBits(new Rectangle(0, 0,
txtr.Width, txtr.Height),
ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
int stride = bmData.Stride;
System.IntPtr Scan0 = bmData.Scan0;
// Adjust the alpha value of each pixel in the texture bitmap.
// The white-most pixels get the highest alpha (most opaque),
// and the dark-most pixels get the lowest alpha (most transparent).
unsafe
{
byte * p = (byte *)(void *)Scan0;
int nOffset = stride - txtr.Width*4;


for (int y=0; y < txtr.Height; ++y)
{
for (int x=0; x < txtr.Width; ++x)
{
// p[3] is alpha - array is BGRA (blue, green ,
// red, alpha)
p[3] = (byte)(p[0] * textureTransparency);
p += 4; // Move to the next pixel
}
p += nOffset;
}
}
txtr.UnlockBits(bmData);

Graphics gr = Graphics.FromImage(img);
TextureBrush myBrush = new TextureBrush(txtr);
gr.FillRectangle (myBrush,0,0,img.Width, img.Height);

Graphics graphics = this.CreateGraphics();
graphics.DrawImage(img,0,0,img.Width,img.Height);

graphics.Dispose();
myBrush.Dispose();
gr.Dispose();

txtr.Dispose();
}



// Converts the image to grayscale
public void MakeImageGrayscale (Image img)
{
ColorMatrix cMatrix = new ColorMatrix (
new float[][] {
new float[] {0.299F, 0.299F, 0.299F, 0, 0},
new float[] {0.587F, 0.587F, 0.587F, 0, 0},
new float[] {0.114F, 0.114F, 0.114F, 0, 0},
new float[] {0, 0, 0, 1, 0},
new float[] {0, 0, 0, 0, 1}});   // last row is the translation row, not alpha

ImageAttributes imageAttrib = new ImageAttributes();
imageAttrib.SetColorMatrix(cMatrix);

Graphics gr = Graphics.FromImage (img);
// Apply the grayscale image attribute
gr.DrawImage (img, new Rectangle(0, 0, img.Width, img.Height),
0, 0, img.Width, img.Height, GraphicsUnit.Pixel,
imageAttrib);
gr.Dispose();
}
}

Thursday, April 12, 2007

GDI+ classes

Microsoft® Windows® GDI+ provides the following classes:

AdjustableArrowCap
Bitmap
BitmapData
Blur
BrightnessContrast
Brush
CachedBitmap
CharacterRange
Color
ColorBalance
ColorCurve
ColorLUT
ColorMatrixEffect
CustomLineCap
Effect
EncoderParameter
EncoderParameters
Font
FontCollection
FontFamily
GdiplusBase
Graphics
GraphicsPath
GraphicsPathIterator
HatchBrush
HueSaturationLightness
Image
ImageAttributes
ImageCodecInfo
ImageItemData
InstalledFontCollection
Levels
LinearGradientBrush
Matrix
Metafile
MetafileHeader
PathData
PathGradientBrush
Pen
Point
PointF
PrivateFontCollection
PropertyItem
Rect
RectF
RedEyeCorrection
Region
Sharpen
Size
SizeF
SolidBrush
StringFormat
TextureBrush
Tint

Labels:

Wednesday, April 11, 2007

MPEG documents freely on the net :

To learn the MPEG stuff, look at ISO/IEC 14496 on the publicly available standards page :

http://isotc.iso.org/livelink/livelink/fetch/2000/2489/Ittf_Home/PubliclyAvailableStandards.htm

Labels:

My Grid Control... CSharp...

None of the grid controls is suitable for my needs :
====================================
My requirements are...

1. Each and every cell must be able to set its back and fore color...
2. The grid control must support merging of cells...
3. The grid control must support a separate font style for each and every cell in the grid...
4. It must allow dynamic adding and removal of cells...
5. It must also be possible to clear the values in a cell...

Before that I had seen the SourceGrid control in a codeproject.com article, but it doesn't support dynamic adding and removal of cells...


I have developed my user control from the Panel class... if anyone comes across any article related to this,
please let me know...



I derived my grid control from the System.Windows.Forms.Panel class...

I got flickering in the control, so I overrode the control styles as follows :

class CustomControlPanel : System.Windows.Forms.Panel
{

public CustomControlPanel()
{

this.SuspendLayout();
base.AutoScroll = true;
SetStyle(ControlStyles.UserPaint,true);
SetStyle(ControlStyles.AllPaintingInWmPaint,true);
SetStyle(ControlStyles.DoubleBuffer,true); //This disables the flickering

this.UpdateStyles();
this.ResumeLayout(false);
}

}

Labels:

Calculate the string size for the specified font

Graphics g = this.CreateGraphics();
Font font = new System.Drawing.Font("Microsoft Sans Serif", 8.25F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((System.Byte)(0)));

SizeF textSize = g.MeasureString("sundar",font);
MessageBox.Show("Width ="+ textSize.Width + " :" + "Height :" + textSize.Height);

We can use GetTextExtentPoint32() in the case of VC++ (a minimal sketch is below)...
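
A minimal VC++ sketch of the same measurement (the DC must already have the desired font selected into it):

#include <windows.h>

// Win32 counterpart of Graphics.MeasureString above.
SIZE MeasureText(HDC hdc, LPCTSTR szText)
{
    SIZE size = {0};
    GetTextExtentPoint32(hdc, szText, lstrlen(szText), &size);
    return size;    // size.cx = width, size.cy = height in device units
}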

Labels:

Thursday, April 05, 2007

Adding file Dialog to the file source filter

Win32 API :
ChooseFont() - Display Font Dialog
ChooseColor() - Display color dialog
GetOpenFileName() -Display File dialog

Learnt today :


1. Added the file dialog to the Image Source Filter

1. Before creating an instance of the filter, we must prompt the user to select the file from a file dialog...
2. So I displayed the file dialog before the filter's CreateInstance() fn...
3. Initialize the filter member in the constructor's initializer list, like this:

CMySourceStream::CMySourceStream(HRESULT* phr,CSourceFilter* pFilter,LPCWSTR pPinName):CSourceStream(NAME("SourceStream"),phr,pFilter,pPinName),
m_pFilter((CSourceFilter*)pFilter)
{
}

If we initialize the filter member like the following (assignment inside the constructor body), I faced the problem that the filter instance is already locked...
So use the approach above (the initializer list)...
CMySourceStream::CMySourceStream(HRESULT* phr,CSourceFilter* pFilter,LPCWSTR pPinName):CSourceStream(NAME("SourceStream"),phr,pFilter,pPinName)
{
m_pFilter = (CSourceFilter*)pFilter;
}

4.Display the file dialog...


OPENFILENAME ofn;
char szFileName[MAX_PATH]="";
char *szFile;
szFile = new char[260];

int i =0,j = 0;

ZeroMemory(&ofn, sizeof(ofn));
ofn.lStructSize = sizeof(ofn); // SEE NOTE BELOW
ofn.hwndOwner = ::GetActiveWindow();
ofn.lpstrFilter = "Bitmap(*.bmp)\0*.bmp\0jpg Files (*.jpg)\0*.jpg\0All Files (*.*)\0*.*\0";
ofn.lpstrFile = szFileName;
ofn.nMaxFile = MAX_PATH;
ofn.Flags = OFN_EXPLORER | OFN_FILEMUSTEXIST | OFN_HIDEREADONLY;
ofn.lpstrDefExt = "bmp";

if(GetOpenFileName(&ofn)) //Displays file dialog...
{
// Do something useful with the filename stored in szFileName
//if(szFileName))
//OutputDebugString(szFileName);
while(szFileName[i] != NULL)
{
szFile[j] = szFileName[i];
if (szFileName[i] == '\\')
{
j= j+1;
szFile[j] = '\\';
}
i = i + 1;
j = j + 1;
}
szFile[j] = '\0';   // use j, not i: the copy above doubles every backslash
OutputDebugString(szFile);
return szFile;
}
else
{
return NULL;
}
}


Note: when we assign

char szFileName[MAX_PATH]="";
ofn.lpstrFile = szFileName;

szFileName must be initialized to an empty string, otherwise the file dialog will not be shown

on the screen...

Labels:

Wednesday, April 04, 2007

Rendering ASF files

For rendering an ASF file, we have to use the WM ASF Reader as the file source...


IBaseFilter* m_pReader;
IFileSourceFilter* m_pFileSource;

CoCreateInstance(CLSID_WMAsfReader, NULL, CLSCTX_INPROC_SERVER, IID_IBaseFilter, (void **) &m_pReader);


// Add the ASF reader filter to the graph. For ASF/WMV/WMA content,
// this filter is NOT the default and must be added explicitly.

m_pGraphBuilder->AddFilter(m_pReader, L"ASF Reader");


// Set its source filename
m_pReader->QueryInterface(IID_IFileSourceFilter, (void **) &m_pFileSource);

// Attempt to load this file
m_pFileSource->Load(m_wcMediaName, NULL);


If we render the output pins of m_pReader, we will get

the display in the video window (a minimal sketch is below)...
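
A minimal sketch of rendering every output pin of the reader (names follow the snippet above; the rest is assumption):

// Render every output pin exposed by the ASF reader so the graph builder
// completes the playback chain (video window, audio renderer, ...).
HRESULT RenderReaderPins(IGraphBuilder *pGraph, IBaseFilter *pReader)
{
    IEnumPins *pEnum = NULL;
    HRESULT hr = pReader->EnumPins(&pEnum);
    if (FAILED(hr))
        return hr;

    IPin *pPin = NULL;
    while (pEnum->Next(1, &pPin, NULL) == S_OK)
    {
        PIN_DIRECTION dir;
        pPin->QueryDirection(&dir);
        if (dir == PINDIR_OUTPUT)
            pGraph->Render(pPin);      // let the graph pick the decoders and renderers
        pPin->Release();
    }
    pEnum->Release();
    return S_OK;
}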

Labels:

Handling multiple events using WaitForMultipleObjects() fn

m_hMsgEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
m_hThreadStopEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
m_hThread = CreateThread(NULL, 2048, (LPTHREAD_START_ROUTINE)pr_threadStat, (VOID *)this, NULL, &dwID);


DWORD ThreadProc(LPVOID lpParam)
{
CMwsServlet* p = (CMwsServlet*)lpParam;

p->thread();

return 0;   // the thread routine must return a DWORD exit code
}

void classname ::thread()
{

HANDLE hEvents[] = { m_hThreadStopEvent, m_hMsgEvent};
DWORD dwObject;
bool bNeedMoreData = false;
// int i, nCondition;

// Main thread loop
while(true)
{

dwObject = WaitForMultipleObjects(2, hEvents, FALSE, INFINITE);
if (dwObject == 0)
{
return; // Stop event: leave process
}

// If servlet event
if (dwObject == 1)
{
IMWSServletMsgPtr p = m_pServlet->GetNextMessage();

OutputDebugString("Message from Servlet: '");
OutputDebugString(m_pServlet->FormatMessage(p));
OutputDebugString("'\n");
}
}
}

Labels:

Simple program to Play the file

#include <dshow.h>   // DirectShow COM interfaces (IGraphBuilder, IMediaControl, ...)
#include <stdio.h>

#pragma comment(lib,"strmiids")
//load the strmiids.lib

void PlayFile()
{
IGraphBuilder* pGraph = NULL;
IMediaControl* pControl = NULL;
IMediaEvent* pEvent = NULL;

HRESULT hr = S_OK;

hr = CoInitialize(NULL);

if(FAILED(hr))
{
printf("\n Could not initialize the COM runtime library");
return;
}

hr = CoCreateInstance(CLSID_FilterGraph,NULL,CLSCTX_INPROC_SERVER,IID_IGraphBuilder,(void**)&pGraph); //ask for IGraphBuilder, since pGraph is an IGraphBuilder*

if(FAILED(hr))
{
printf("\n Error in CoCreateInstance() fn");
return;
}


hr = pGraph->QueryInterface(IID_IMediaControl,(void**)&pControl);
hr = pGraph->QueryInterface(IID_IMediaEvent,(void**)&pEvent);


hr = pGraph->RenderFile(L"D:\\profile\\avi\\cardriving.wmv",NULL);

if(SUCCEEDED(hr))
{
//Run the Graph...
hr = pControl->Run();

if(SUCCEEDED(hr))
{
long evCode;
pEvent->WaitForCompletion(INFINITE,&evCode);


}
}


pControl->Release();
pEvent->Release();
pGraph->Release();

CoUninitialize();

}

int main(int argc, char* argv[])
{
PlayFile();
return 0;
}

Labels:

How to develop a C# component ?

How to develop a C# component ?


Create a new DLL project and add the following...
Server

using System;
using System.Runtime.InteropServices;

namespace WMDateTime
{
[Guid("D52E9FC7-6374-4d5a-B0DA-7343ABB3BF82")]
[InterfaceType(ComInterfaceType.InterfaceIsDual)]
public interface IWMDateTime
{
///
/// Event that can be specified by the caller. Set when the config changes
/// or when a new message is available.
///

[DispId(1)]
long UTCNow { get; }
}

public class WMDateTime : IWMDateTime
{
public WMDateTime() {}
public long UTCNow
{
get
{
DateTime d1 = DateTime.UtcNow;
return d1.Ticks;
}
}
}
}

In the Post-Build event add the following :
"C:\Program Files\Microsoft Visual Studio .NET 2003\sdk\v1.1\Bin\"tlbexp $(TargetPath)

(tlbexp only exports the type library; for the C++ COM client below to create the class, the assembly also has to be registered, e.g. with regasm.)




C++ COM client :
------------------
1. IWMDateTimePtr dt;

if (FAILED(dt.CreateInstance(__uuidof(WMDateTime))))
{
hr = HRESULT_FROM_WIN32(ERROR_DLL_NOT_FOUND);
return hr;
}
m_dtAudStartTime = dt->GetUTCNow(); //C# component's function

Labels:

Tuesday, April 03, 2007

Set As Active or Startup project

If two or more projects are attached to the same solution, we may pick any of the projects and select Set As Active Project or Set As StartUp Project...
to make that project the main project...

This is very useful whenever a solution has multiple DLL projects and only one EXE project. If, while running the application, the IDE has a DLL set as the main project, it simply prompts you for the EXE with which to load the DLL..

so setting the EXE project as the startup project is very useful...

Monday, April 02, 2007

Bump mapping Source code

struct Point
{
int X;
int Y;
};


Constructor()
{
Point** pt;   // in real code this would be a member variable
pt = new Point*[nWidth];

for(int i = 0; i < nWidth; i++)
{
pt[i] = new Point[nHeight];
}
}

Destructor()
{
for(int i = 0; i < nWidth; i++)
{
delete[] pt[i];
}

if(pt)
{
delete[] pt;
}
}


BumpMapProcessing()
{

UpdateBumpMap();

BYTE* pSrc = srcImage->ImageData;
BYTE* pDest = destImage->ImageData;
int xOffset,yOffset;


//OffsetFilter( )or OffsetFilterAbs() or OffsetFilterAntiAlias ()....


int scanline = srcImage->widthStep;   // or srcImage->width * srcImage->nChannels

for(int y = 0; y < nHeight; y++)
{
for(int x =0; x < nWidth; x++)
{

xOffset = pt[x][y].X;
yOffset = pt[x][y].Y;

if( yOffset >= 0 && yOffset < nHeight && xOffset >=0 && xOffset < nWidth)
{

pDest[0] = pSrc[ (yOffset * scanline) + ( xOffset * 3) ];

pDest[1] = pSrc[ (yOffset * scanline) + ( xOffset * 3) + 1];

pDest[2] = pSrc[ (yOffset * scanline) + ( xOffset * 3) + 2];


}

pDest = pDest + 3;
}
}


}




UpdateBumpMap()
{
    // Initialize the displacement points in pt[][].
    // For the anti-aliased path we must keep floating-point coordinates;
    // otherwise integer coordinates are enough. A sketch follows.
}
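A minimal sketch of what UpdateBumpMap() could compute for a sphere-style bump map, mirroring the C# Sphere() function shown further below. It is written here as a free function that takes the members (pt, nWidth, nHeight) as parameters; the sphere formula is taken from the C# code, the rest is an assumption.

#include <math.h>

// Fill pt[x][y] with the absolute source coordinate each destination pixel
// should be read from, using the same r -> r*r/max(midX, midY) sphere
// distortion as the C# Sphere() function below.
void UpdateBumpMap(Point** pt, int nWidth, int nHeight)
{
    int midX = nWidth  / 2;
    int midY = nHeight / 2;
    int maxMid = (midX > midY) ? midX : midY;

    for(int x = 0; x < nWidth; x++)
    {
        for(int y = 0; y < nHeight; y++)
        {
            int trueX = x - midX;
            int trueY = y - midY;

            double theta  = atan2((double)trueY, (double)trueX);
            double radius = sqrt((double)(trueX * trueX + trueY * trueY));
            double newRadius = radius * radius / maxMid;

            double newX = midX + newRadius * cos(theta);
            double newY = midY + newRadius * sin(theta);

            if(newX > 0 && newX < nWidth && newY > 0 && newY < nHeight)
            {
                pt[x][y].X = (int)newX;
                pt[x][y].Y = (int)newY;
            }
            else
            {
                pt[x][y].X = 0;
                pt[x][y].Y = 0;
            }
        }
    }
}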




// The following C# code shows the complete flow of this kind of bump mapping.
// The Sphere() effect is one example of the bump-mapping process; all of these
// bump-mapping filters follow the same pattern: build a displacement map, then
// apply one of the offset filters. Point and FloatPoint are simple structs
// holding integer and floating-point X/Y coordinates respectively.



public static bool Sphere(Bitmap b, bool bSmoothing)
{
int nWidth = b.Width;
int nHeight = b.Height;

FloatPoint [,] fp = new FloatPoint[nWidth, nHeight];
Point [,] pt = new Point[nWidth, nHeight];

Point mid = new Point();
mid.X = nWidth/2;
mid.Y = nHeight/2;

double theta, radius;
double newX, newY;

for (int x = 0; x < nWidth; ++x)
for (int y = 0; y < nHeight; ++y)
{
int trueX = x - mid.X;
int trueY = y - mid.Y;
theta = Math.Atan2((trueY),(trueX));

radius = Math.Sqrt(trueX*trueX + trueY*trueY);

double newRadius = radius * radius/(Math.Max(mid.X, mid.Y));

newX = mid.X + (newRadius * Math.Cos(theta));

if (newX > 0 && newX < nWidth)
{
fp[x, y].X = newX;
pt[x, y].X = (int) newX;
}
else
{
fp[x, y].X = fp[x,y].Y = 0.0;
pt[x, y].X = pt[x,y].Y = 0;
}

newY = mid.Y + (newRadius * Math.Sin(theta));

if (newY > 0 && newY < nHeight && newX > 0 && newX < nWidth)
{
fp[x, y].Y = newY;
pt[x, y].Y = (int) newY;
}
else
{
fp[x, y].X = fp[x,y].Y = 0.0;
pt[x, y].X = pt[x,y].Y = 0;
}
}

if(bSmoothing)
OffsetFilterAntiAlias(b, fp); // smoothing -> use the anti-aliased (bilinear) filter
else
OffsetFilterAbs(b, pt);

return true;
}


public static bool OffsetFilter(Bitmap b, Point[,] offset )
{
Bitmap bSrc = (Bitmap)b.Clone();

// GDI+ still lies to us - the return format is BGR, NOT RGB.
BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
BitmapData bmSrc = bSrc.LockBits(new Rectangle(0, 0, bSrc.Width, bSrc.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int scanline = bmData.Stride;

System.IntPtr Scan0 = bmData.Scan0;
System.IntPtr SrcScan0 = bmSrc.Scan0;

unsafe
{
byte * p = (byte *)(void *)Scan0;
byte * pSrc = (byte *)(void *)SrcScan0;

int nOffset = bmData.Stride - b.Width*3;
int nWidth = b.Width;
int nHeight = b.Height;

int xOffset, yOffset;

for(int y=0;y < nHeight;++y)
{
for(int x=0; x < nWidth; ++x )
{
xOffset = offset[x,y].X;
yOffset = offset[x,y].Y;

if (y+yOffset >= 0 && y+yOffset < nHeight && x+xOffset >= 0 && x+xOffset < nWidth)
{
p[0] = pSrc[((y+yOffset) * scanline) + ((x+xOffset) * 3)];
p[1] = pSrc[((y+yOffset) * scanline) + ((x+xOffset) * 3) + 1];
p[2] = pSrc[((y+yOffset) * scanline) + ((x+xOffset) * 3) + 2];
}

p += 3;
}
p += nOffset;
}
}

b.UnlockBits(bmData);
bSrc.UnlockBits(bmSrc);

return true;
}

public static bool OffsetFilterAntiAlias(Bitmap b, FloatPoint[,] fp)
{
Bitmap bSrc = (Bitmap)b.Clone();

// GDI+ still lies to us - the return format is BGR, NOT RGB.
BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
BitmapData bmSrc = bSrc.LockBits(new Rectangle(0, 0, bSrc.Width, bSrc.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int scanline = bmData.Stride;

System.IntPtr Scan0 = bmData.Scan0;
System.IntPtr SrcScan0 = bmSrc.Scan0;

unsafe
{
byte * p = (byte *)(void *)Scan0;
byte * pSrc = (byte *)(void *)SrcScan0;

int nOffset = bmData.Stride - b.Width*3;
int nWidth = b.Width;
int nHeight = b.Height;

double xOffset, yOffset;

double fraction_x, fraction_y, one_minus_x, one_minus_y;
int ceil_x, ceil_y, floor_x, floor_y;
Byte p1, p2;

for(int y=0;y < nHeight;++y)
{
for(int x=0; x < nWidth; ++x )
{
xOffset = fp[x,y].X;
yOffset = fp[x,y].Y;

// Setup

floor_x = (int)Math.Floor(xOffset);
floor_y = (int)Math.Floor(yOffset);
ceil_x = floor_x + 1;
ceil_y = floor_y + 1;
fraction_x = xOffset - floor_x;
fraction_y = yOffset - floor_y;
one_minus_x = 1.0 - fraction_x;
one_minus_y = 1.0 - fraction_y;

if (floor_y >= 0 && ceil_y < nHeight && floor_x >= 0 && ceil_x < nWidth)
{
// Blue

p1 = (Byte)(one_minus_x * (double)(pSrc[floor_y * scanline + floor_x * 3]) +
fraction_x * (double)(pSrc[floor_y * scanline + ceil_x * 3]));

p2 = (Byte)(one_minus_x * (double)(pSrc[ceil_y * scanline + floor_x * 3]) +
fraction_x * (double)(pSrc[ceil_y * scanline + 3 * ceil_x]));

p[x * 3 + y*scanline] = (Byte)(one_minus_y * (double)(p1) + fraction_y * (double)(p2));

// Green

p1 = (Byte)(one_minus_x * (double)(pSrc[floor_y * scanline + floor_x * 3 + 1]) +
fraction_x * (double)(pSrc[floor_y * scanline + ceil_x * 3 + 1]));

p2 = (Byte)(one_minus_x * (double)(pSrc[ceil_y * scanline + floor_x * 3 + 1]) +
fraction_x * (double)(pSrc[ceil_y * scanline + 3 * ceil_x + 1]));

p[x * 3 + y*scanline + 1] = (Byte)(one_minus_y * (double)(p1) + fraction_y * (double)(p2));

// Red

p1 = (Byte)(one_minus_x * (double)(pSrc[floor_y * scanline + floor_x * 3 + 2]) +
fraction_x * (double)(pSrc[floor_y * scanline + ceil_x * 3 + 2]));

p2 = (Byte)(one_minus_x * (double)(pSrc[ceil_y * scanline + floor_x * 3 + 2]) +
fraction_x * (double)(pSrc[ceil_y * scanline + 3 * ceil_x + 2]));

p[x * 3 + y*scanline + 2] = (Byte)(one_minus_y * (double)(p1) + fraction_y * (double)(p2));
}
}
}
}

b.UnlockBits(bmData);
bSrc.UnlockBits(bmSrc);

return true;
}
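For reference, the per-channel blend that OffsetFilterAntiAlias() performs above is standard bilinear interpolation. Writing S for a source channel value and (X, Y) for the fractional source position stored in fp[x, y]:

x_0 = \lfloor X \rfloor, \quad x_1 = x_0 + 1, \quad f_x = X - x_0 \qquad (\text{and likewise } y_0,\ y_1,\ f_y)

P = (1 - f_y)\left[(1 - f_x)\,S(x_0, y_0) + f_x\,S(x_1, y_0)\right] + f_y\left[(1 - f_x)\,S(x_0, y_1) + f_x\,S(x_1, y_1)\right]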


public static bool OffsetFilterAbs(Bitmap b, Point[,] offset )
{
Bitmap bSrc = (Bitmap)b.Clone();

// GDI+ still lies to us - the return format is BGR, NOT RGB.
BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
BitmapData bmSrc = bSrc.LockBits(new Rectangle(0, 0, bSrc.Width, bSrc.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int scanline = bmData.Stride;

System.IntPtr Scan0 = bmData.Scan0;
System.IntPtr SrcScan0 = bmSrc.Scan0;

unsafe
{
byte * p = (byte *)(void *)Scan0;
byte * pSrc = (byte *)(void *)SrcScan0;

int nOffset = bmData.Stride - b.Width*3;
int nWidth = b.Width;
int nHeight = b.Height;

int xOffset, yOffset;

for(int y=0;y < nHeight;++y)
{
for(int x=0; x < nWidth; ++x )
{
xOffset = offset[x,y].X;
yOffset = offset[x,y].Y;

if (yOffset >= 0 && yOffset < nHeight && xOffset >= 0 && xOffset < nWidth)
{
p[0] = pSrc[(yOffset * scanline) + (xOffset * 3)];
p[1] = pSrc[(yOffset * scanline) + (xOffset * 3) + 1];
p[2] = pSrc[(yOffset * scanline) + (xOffset * 3) + 2];
}

p += 3;
}
p += nOffset;
}
}

b.UnlockBits(bmData);
bSrc.UnlockBits(bmSrc);

return true;
}

Kernel Objects Vs GDI objects

Kernel objects:

1. There are several types of kernel objects, such as access token objects, event objects, file objects, file-mapping objects, I/O completion port objects, job objects, mailslot objects, mutex objects, pipe objects, process objects, semaphore objects, thread objects, and waitable timer objects.

2.Each kernel object is simply a memory block allocated by the kernel and is accessible only by the kernel. This memory block is a data structure whose members maintain information about the object.

3.Kernel objects are owned by the kernel, not by a process.

4. A kernel object can outlive the process that created it.

5.The kernel knows how many processes are using a particular kernel object because each object contains a usage count.

6.usage count is one of the data members common to all kernel object types. When an object is first created, its usage count is set to 1. Then when another process gains access to an existing kernel object, the usage count is incremented. When a process terminates, the kernel automatically decrements the usage count for all the kernel objects the process still has open. If the object's usage count goes to 0, the kernel destroys the object. This ensures that no kernel object will remain in the system if no processes are referencing the object.

7.Kernel objects can be protected with a security descriptor.

8.A security descriptor describes who created the object, who can gain access to or use the object, and who is denied access to the object. Security descriptors are usually used when writing server applications; you can ignore this feature of kernel objects if you are writing client-side applications.

9.In addition to kernel objects, your application might use other types of objects, such as menus, windows, mouse cursors, brushes, and fonts. These objects are User objects or Graphics Device Interface (GDI) objects, not kernel objects.


10. How can we identify whether an object is a kernel object or a User/GDI object?

The easiest way to determine whether an object is a kernel object is to examine the function that creates it. Almost all functions that create kernel objects have a parameter that allows you to specify security attribute information, as the CreateFileMapping function does.

11.When a process is initialized, the system allocates a handle table for it. This handle table is used only for kernel objects, not for User objects or GDI objects. The details of how the handle table is structured and managed are undocumented.


12. The structure of a process's handle table:

Index   Pointer to Kernel Object Memory Block   Access Mask (DWORD of flag bits)   Flags (DWORD of flag bits)
1       0x????????                              0x????????                         0x????????
2       0x????????                              0x????????                         0x????????
...     ...                                     ...                                ...

13.When a process first initializes, its handle table is empty.

14. What is the difference between CMutex and CSemaphore?

A mutex (CMutex) gives exclusive access: only one thread at a time can own the lock on the shared resource. A semaphore (CSemaphore) allows up to a specified maximum number of threads to hold the lock at the same time. (A sketch follows after this list.)

15. Closing a kernel object - BOOL CloseHandle(HANDLE hObject);

16. Changing a handle's flags - call the SetHandleInformation function. Short sketches of both follow.
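
A minimal sketch of items 15 and 16, assuming a plain Win32 console program; the event object is only an example, any kernel object handle works the same way. Note the LPSECURITY_ATTRIBUTES parameter of CreateEvent, which is the tell-tale sign (item 10) that the function creates a kernel object; GDI creation functions such as CreatePen have no such parameter.

#include <windows.h>
#include <stdio.h>

int main()
{
    // CreateEvent takes an LPSECURITY_ATTRIBUTES parameter, so it creates a
    // kernel object; the returned HANDLE is an index into the process's
    // handle table, and the object's usage count starts at 1.
    HANDLE hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    if (hEvent == NULL)
    {
        printf("CreateEvent failed (%lu)\n", GetLastError());
        return 1;
    }

    // Protect the handle so that an accidental CloseHandle() call fails.
    SetHandleInformation(hEvent, HANDLE_FLAG_PROTECT_FROM_CLOSE,
                         HANDLE_FLAG_PROTECT_FROM_CLOSE);

    // ... use the event ...

    // Remove the protection, then close the handle. Closing decrements the
    // object's usage count; the kernel destroys the object when it reaches 0.
    SetHandleInformation(hEvent, HANDLE_FLAG_PROTECT_FROM_CLOSE, 0);
    CloseHandle(hEvent);

    return 0;
}

And a small sketch of the difference described in item 14, using the raw Win32 objects that MFC's CMutex and CSemaphore wrap: a mutex admits one owner at a time, a semaphore admits up to a fixed count of owners.

#include <windows.h>

// Mutex: only one thread at a time may own it.
// Semaphore: up to a fixed number of threads (here 3) may hold it at once.

HANDLE g_hMutex;
HANDLE g_hSemaphore;

void UseExclusiveResource()
{
    WaitForSingleObject(g_hMutex, INFINITE);     // one owner at a time
    // ... touch data that must not be accessed concurrently ...
    ReleaseMutex(g_hMutex);
}

void UseOneOfThePool()
{
    WaitForSingleObject(g_hSemaphore, INFINITE); // up to 3 concurrent holders
    // ... use one item from a pool of 3 ...
    ReleaseSemaphore(g_hSemaphore, 1, NULL);
}

int main()
{
    g_hMutex     = CreateMutex(NULL, FALSE, NULL);
    g_hSemaphore = CreateSemaphore(NULL, 3, 3, NULL);

    // (threads that call the two functions above would be created here)

    CloseHandle(g_hSemaphore);
    CloseHandle(g_hMutex);
    return 0;
}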

Labels:

Learnt things

Three things I learned today:
1. CRectTracker

Use it when we want to track a rectangle with the mouse: depending on the mouse events, the rectangle is grown or shrunk, and the user can drag and drop it.

It is also used when displaying in-place OLE items, like an embedded Excel sheet inside a Word document.

We can reuse it for our own rectangle drag-and-drop handling; a minimal sketch follows.
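
A minimal sketch, assuming an MFC CView-derived class (here called CMyView) with a CRectTracker member m_tracker; the class and member names are placeholders, only the CRectTracker calls themselves come from MFC.

// Requires <afxext.h> (CRectTracker lives there).

void CMyView::OnLButtonDown(UINT nFlags, CPoint point)
{
    if (m_tracker.HitTest(point) >= 0)
    {
        // Clicked on the existing rectangle or one of its resize handles:
        // let the tracker move/resize it.
        if (m_tracker.Track(this, point))
            Invalidate();
    }
    else
    {
        // Clicked elsewhere: rubber-band a brand new rectangle.
        CRectTracker tracker;
        tracker.m_nStyle = CRectTracker::dottedLine | CRectTracker::resizeInside;
        if (tracker.TrackRubberBand(this, point))
        {
            tracker.m_rect.NormalizeRect();
            m_tracker.m_rect  = tracker.m_rect;
            m_tracker.m_nStyle = tracker.m_nStyle;
            Invalidate();
        }
    }

    CView::OnLButtonDown(nFlags, point);
}

void CMyView::OnDraw(CDC* pDC)
{
    // Draw the tracked rectangle with its resize handles.
    m_tracker.Draw(pDC);
}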

2. What can I do with the thread exception dialog?

For example, System.Threading.ThreadAbortException, which is thrown when a thread is aborted (the case the dialog reports as a hung thread). We can catch it separately from other exceptions:



try
{
    // code that may run on the aborted thread
}
catch(System.Threading.ThreadAbortException)
{
    // handle the abort separately; note it is automatically re-thrown at the
    // end of this block unless Thread.ResetAbort() is called
}
catch(Exception e)
{
    MessageBox.Show(e.ToString());
}


3. C# ListView control

Within the SelectedIndexChanged event we displayed the selected item:

private void lstView_SelectedIndexChanged(object sender, EventArgs e)
{
    string text = lstView.SelectedItems[0].Text; // sometimes throws
}

So before accessing it, we have to check the count first:

private void lstView_SelectedIndexChanged(object sender, EventArgs e)
{
    if(lstView.SelectedItems.Count > 0)
    {
        string text = lstView.SelectedItems[0].Text; // safe now
    }
}

This version does not throw an error.


SelectedIndexChanged is raised even when the current item is deselected. At that moment SelectedItems is empty, so referring to SelectedItems[0] is invalid; that is why the count check above is needed.