Friday, February 24, 2012
How to calculate hours between 2 dates but excluding weekends?
Looking for a way & logic to calculate hours between 2 dates (Ticket.date_opened and Ticket.solved_date) but excluding weekends?
This function calculates hours only:
DATEDIFF(hour, Ticket.date_opened, Ticket.solved_date).
Thanks!
No easy way! I have to do something like this myself next week.
You have to decide whether this is going to be done in SQL or VB. I'd
suggest VB will be the easiest to code and depending on the volumes of
data involved may be best.
So assuming VB, I'd suggest the DateDiff function you have, then work out how many weekends fall in the date range; you can use the Weekday function to tell what day you are starting on. Subtract the number of weekends * 48 and you'll be some way towards it.
--
Regards
Chris
agenda9533 wrote:
> Looking for way&logic to calculate hours between 2 dates
> (Ticket.date_opened and Ticket.solved_date) but excluding weekends ?
> This function calculates hours only
> DATEDIFF(hour, Ticket.date_opened, Ticket.solved_date).
> Thanks!|||See my reply to your previous post.
--
HTH,
---
Teo Lachev, MVP, MCSD, MCT
"Microsoft Reporting Services in Action"
"Applied Microsoft Analysis Services 2005"
Home page and blog: http://www.prologika.com/
---
"agenda9533" <agenda9533@.discussions.microsoft.com> wrote in message
news:5085DC03-FEE0-4934-9EFE-12BDE708F139@.microsoft.com...
> Looking for way&logic to calculate hours between 2 dates
> (Ticket.date_opened
> and Ticket.solved_date) but excluding weekends ?
> This function calculates hours only
> DATEDIFF(hour, Ticket.date_opened, Ticket.solved_date).
> Thanks!
>|||Perhaps some T-SQL?
I plan on trying this out when I get the time:
http://www.aspfaq.com/show.asp?id=2453
No holidays though!
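A rough T-SQL sketch of the same weekend-subtraction idea (not the article's code, just an illustration): it assumes both endpoints fall on weekdays, only whole weekend days are subtracted, and holidays are ignored; the Ticket table and columns are the ones from the original question.
SELECT DATEDIFF(hour, Ticket.date_opened, Ticket.solved_date)
       -- DATEDIFF(week, ...) always counts Saturday-to-Sunday boundaries, regardless of DATEFIRST,
       -- so (boundaries * 2) plus the two endpoint checks gives the weekend days in the range.
       -- DATENAME returns localized day names; this assumes an English session language.
       - 24 * ( DATEDIFF(week, Ticket.date_opened, Ticket.solved_date) * 2
                + CASE WHEN DATENAME(weekday, Ticket.date_opened) = 'Sunday'   THEN 1 ELSE 0 END
                + CASE WHEN DATENAME(weekday, Ticket.solved_date) = 'Saturday' THEN 1 ELSE 0 END )
         AS business_hours
FROM Ticket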
agenda9533 wrote:
> Looking for way&logic to calculate hours between 2 dates (Ticket.date_opened
> and Ticket.solved_date) but excluding weekends ?
> This function calculates hours only
> DATEDIFF(hour, Ticket.date_opened, Ticket.solved_date).
> Thanks!
>
How to calculate hours between 2 dates but excluding weekends?
Looking for a way & logic to calculate hours between 2 dates (Ticket.date_opened and Ticket.solved_date) but excluding weekends?
This function calculates hours only:
DATEDIFF(hour, Ticket.date_opened, Ticket.solved_date).
Thanks!
You need to create a custom VB.NET function that takes the two dates, something like:
Dim DaysPassed As Integer = DateDiff(DateInterval.Day, d1, d2)
'Using Sunday because it means that the Saturday has actually fully passed by
Dim SaturdaysPassed As Integer = DateDiff(DateInterval.WeekOfYear, d1, d2, FirstDayOfWeek.Sunday)
Dim SundaysPassed As Integer = DateDiff(DateInterval.WeekOfYear, d1, d2, FirstDayOfWeek.Monday)
Dim HoursPassed As Integer = (DaysPassed - SaturdaysPassed - SundaysPassed) * 24
--
HTH,
---
Teo Lachev, MVP, MCSD, MCT
"Microsoft Reporting Services in Action"
"Applied Microsoft Analysis Services 2005"
Home page and blog: http://www.prologika.com/
---
"agenda9533" <agenda9533@.discussions.microsoft.com> wrote in message
news:F4E3A740-68A1-4547-BE60-F034A964554F@.microsoft.com...
> Looking for way&logic to calculate hours between 2 dates
> (Ticket.date_opened
> and Ticket.solved_date) but excluding weekends ?
> This function calculates hours only
> DATEDIFF(hour, Ticket.date_opened, Ticket.solved_date).
> Thanks!
>|||Wow! Thanks a lot!
"Teo Lachev [MVP]" wrote:
> You need to create a custom VB.NET function that takes the two dates, sth
> like:
> Dim DaysPassed As Integer = DateDiff(DateInterval.Day, d1, d2)
> Dim SaturdaysPassed As Integer = DateDiff(DateInterval.WeekOfYear,
> d1, d2, FirstDayOfWeek.Sunday) 'Using Sunday because it means that the
> Saturday has actually fully passed by
> Dim SundaysPassed As Integer = DateDiff(DateInterval.WeekOfYear, d1,
> d2, FirstDayOfWeek.Monday)
> Dim HoursPassed As Integer = (DaysPassed - SaturdaysPassed -
> SundaysPassed) * 24
> --
> HTH,
> ---
> Teo Lachev, MVP, MCSD, MCT
> "Microsoft Reporting Services in Action"
> "Applied Microsoft Analysis Services 2005"
> Home page and blog: http://www.prologika.com/
> ---
> "agenda9533" <agenda9533@.discussions.microsoft.com> wrote in message
> news:F4E3A740-68A1-4547-BE60-F034A964554F@.microsoft.com...
> > Looking for way&logic to calculate hours between 2 dates
> > (Ticket.date_opened
> > and Ticket.solved_date) but excluding weekends ?
> > This function calculates hours only
> > DATEDIFF(hour, Ticket.date_opened, Ticket.solved_date).
> >
> > Thanks!
> >
>
>|||How to declare function?
How to pass date_opened and last_updated values?
Should it be report fields - Fields!date_opened.Value,
Fields!last_updated.Value?
"Teo Lachev [MVP]" wrote:
> You need to create a custom VB.NET function that takes the two dates, sth
> like:
> Dim DaysPassed As Integer = DateDiff(DateInterval.Day, d1, d2)
> Dim SaturdaysPassed As Integer = DateDiff(DateInterval.WeekOfYear,
> d1, d2, FirstDayOfWeek.Sunday) 'Using Sunday because it means that the
> Saturday has actually fully passed by
> Dim SundaysPassed As Integer = DateDiff(DateInterval.WeekOfYear, d1,
> d2, FirstDayOfWeek.Monday)
> Dim HoursPassed As Integer = (DaysPassed - SaturdaysPassed -
> SundaysPassed) * 24
> --
> HTH,
> ---
> Teo Lachev, MVP, MCSD, MCT
> "Microsoft Reporting Services in Action"
> "Applied Microsoft Analysis Services 2005"
> Home page and blog: http://www.prologika.com/
> ---
> "agenda9533" <agenda9533@.discussions.microsoft.com> wrote in message
> news:F4E3A740-68A1-4547-BE60-F034A964554F@.microsoft.com...
> > Looking for way&logic to calculate hours between 2 dates
> > (Ticket.date_opened
> > and Ticket.solved_date) but excluding weekends ?
> > This function calculates hours only
> > DATEDIFF(hour, Ticket.date_opened, Ticket.solved_date).
> >
> > Thanks!
> >
>
>|||You create either an embedded function in your report in VB.NET or an external assembly. From the field that needs to be validated, you pass the date fields as input parameters. You may find the beginning of this article helpful to get you started (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql2k/html/ERSCstCode.asp).
--
HTH,
---
Teo Lachev, MVP, MCSD, MCT
"Microsoft Reporting Services in Action"
"Applied Microsoft Analysis Services 2005"
Home page and blog: http://www.prologika.com/
---
"agenda9533" <agenda9533@.discussions.microsoft.com> wrote in message
news:9CA39B8B-FD4B-42C9-8C21-1B78168602DB@.microsoft.com...
> How to declare function?
> How to pass date_opened and last_updated values?
> Should it be report fields - Fields!date_opened.Value,
> Fields!last_updated.Value?
> "Teo Lachev [MVP]" wrote:
>> You need to create a custom VB.NET function that takes the two dates, sth
>> like:
>> Dim DaysPassed As Integer = DateDiff(DateInterval.Day, d1, d2)
>> Dim SaturdaysPassed As Integer = DateDiff(DateInterval.WeekOfYear,
>> d1, d2, FirstDayOfWeek.Sunday) 'Using Sunday because it means that the
>> Saturday has actually fully passed by
>> Dim SundaysPassed As Integer = DateDiff(DateInterval.WeekOfYear,
>> d1,
>> d2, FirstDayOfWeek.Monday)
>> Dim HoursPassed As Integer = (DaysPassed - SaturdaysPassed -
>> SundaysPassed) * 24
>> --
>> HTH,
>> ---
>> Teo Lachev, MVP, MCSD, MCT
>> "Microsoft Reporting Services in Action"
>> "Applied Microsoft Analysis Services 2005"
>> Home page and blog: http://www.prologika.com/
>> ---
>> "agenda9533" <agenda9533@.discussions.microsoft.com> wrote in message
>> news:F4E3A740-68A1-4547-BE60-F034A964554F@.microsoft.com...
>> > Looking for way&logic to calculate hours between 2 dates
>> > (Ticket.date_opened
>> > and Ticket.solved_date) but excluding weekends ?
>> > This function calculates hours only
>> > DATEDIFF(hour, Ticket.date_opened, Ticket.solved_date).
>> >
>> > Thanks!
>> >
>>
How to calculate hours between 2 dates but excluding weekends
This function calculates hours only:
DATEDIFF(hour, Ticket.date_opened, Ticket.solved_date).
Thanks!
I suggest you create a function that calculates the number of weekdays between the 2 dates (excluding the weekends using the datepart(dw, @date) function) and multiplies this by 24...
Or alternatively, calculate the weekend days, multiply this by 24 and subtract that value from the datediff result.
This will only be correct if the dates themselves are not weekend days, of course...
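A rough sketch of such a function in T-SQL (the function name is made up; it counts whole days from the first date up to, but not including, the second, so partial days are ignored):
CREATE FUNCTION dbo.WeekdayHoursBetween (@date_from datetime, @date_to datetime)
RETURNS int
AS
BEGIN
    DECLARE @d datetime, @weekdays int
    SET @d = @date_from
    SET @weekdays = 0
    WHILE @d < @date_to
    BEGIN
        -- with the default US setting (SET DATEFIRST 7), datepart(dw, ...) returns 1 for Sunday and 7 for Saturday
        IF DATEPART(dw, @d) NOT IN (1, 7)
            SET @weekdays = @weekdays + 1
        SET @d = DATEADD(day, 1, @d)
    END
    RETURN @weekdays * 24
END
-- usage: SELECT dbo.WeekdayHoursBetween(date_opened, solved_date) FROM Ticket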
How to calculate distinct count per group?
The report is grouped by Department and I need to know how many distinct customers we have in each of them.
I placed this formula in the group footer:
=CountDistinct(Fields!Customers.Value, "Department")
but it shows me the total number of customers in each row.
Oleg
Hope you followed the following procedure:
1. Group on Department.
2. CountDistinct(Fields!Customers.Value, "<group name>")
NOTE: <group name> is the name of the group, not the field on which grouping is done.
Regards,
Saras
How To Calculate Deviation in Matrix?
Hi All,
I want to create the following matrix report in SSRS:
            Jan-07              Feb-07              Mar-07              Apr-07              May-07
            Sales | Deviation   Sales | Deviation   Sales | Deviation   Sales | Deviation   Sales | Deviation
Here the problem is that I want to calculate the Deviation:
Deviation for Feb 07 = Sales in Feb - Sales in Jan
Deviation for Mar 07 = Sales in Mar - Sales in Feb.
I don't know how to apply the formula in the Matrix.
Can anybody help me?
Thank You.
Balwant Patel.
I do not think there is a straightforward formula for this, since use of the Previous function is not supported in the Matrix data cell. You can either try calculating the deviation from the query itself (see the sketch after this post), or implement it in the following manner using Custom Code:
Place the following VB.Net code in the code tab under report properties:
Shared Public PrevSales as Integer
Public Function CalDeviation(ColCount as Integer,CurSales as Integer) as Integer
Dim deviation as Integer
If ColCount=1 Then
PrevSales=CurSales
deviation=0
Else
deviation=CurSales-PrevSales
PrevSales=CurSales
End If
Return deviation
End Function
Then use the following call to the function in the textbox where you want to show the deviation:
=Code.CalDeviation(RunningValue(Fields!Month.Value,Count,"matrix1_Product"),Sum(Fields!Sales.Value))
Here, I have assumed the names Product, Month & Sales for the respective fields. Also, the Matrix Row Grouping is on Product & Column Grouping is on Month.
Hope this suits your purpose
-Aayush
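For the first alternative (calculating the deviation in the query itself), a minimal T-SQL sketch could look like the following; the SalesByMonth table and its Product, MonthNumber and Sales columns are hypothetical stand-ins for the report's dataset. On SQL Server 2012 or later, LAG() would do the same without the self-join.
SELECT cur.Product,
       cur.MonthNumber,
       cur.Sales,
       cur.Sales - prev.Sales AS Deviation   -- NULL for the first month, like the blank Jan-07 cell
FROM SalesByMonth AS cur
LEFT JOIN SalesByMonth AS prev
       ON prev.Product = cur.Product
      AND prev.MonthNumber = cur.MonthNumber - 1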
Hi Aayush,
Thank you very much for your response...
I tried the code you gave... But when I try to write the call in the code window as below:
=Code.CalDeviation
the function CalDeviation doesn't come up....
Thank you,
Regards,
Balwant Patel.
|||Hi Aayush,
The idea that you gave me works now, but it will not work when I view the report by Year or Quarter...
Please help me on this issue...
Thank you,
Regards,
Balwant Patel.
|||I guess it depends on your requirement, and how your Matrix is designed. What kind of functionality do you desire? Suppose you replace your Column Grouping from Month to either Year or Quarter, then it will work. If your column grouping changes on some condition etc, then it might be a bit more complicated.|||Hi Aayush,
Thank you very much for your response...
I did solve the problem. I applied the following logic: in my data table I added six columns (CurrMonthSales, PrevMonthSales, CurrQuarterSales, PrevQuarterSales, CurrYearSales, PrevYearSales), I update these columns through a stored proc, and then, based on the InScope function, I apply the formula.
Thank you very much again..
Regards,
Balwant Patel.
How To Calculate Database Size In Sql 2000
Regds
Rahul Manglik
Hi,
In the Enterprise Manager:
------------
1. Select the database
2. Right-click -> View -> Taskpad
3. In the Taskpad, select Table Info
Regards,
Ram ASP
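If a script is preferred over the Enterprise Manager taskpad, the same numbers can be pulled with the built-in procedures (the database name is a placeholder):
-- size and unallocated space of one database
USE Northwind   -- placeholder database name
EXEC sp_spaceused

-- name, size and file layout of every database on the server
EXEC sp_helpdb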
How to calculate database Rate of Change?
I need to determine the rate of change (ROC) in bytes/day for my production databases. You might ask why I'm doing this. We have a new NetApp SAN, and it
has the ability to perform SQL Server snapshots for amazingly fast short-term
backup and recovery. In order to implement this, however, I need to size the
volumes appropriately and not waste a lot of space. The formula I have been
provided is this:
Space for Snapshot copies = ROC in bytes per day * number of Snapshot copies
Does anybody have any ideas as to how I might calculate ROC? I think using
DCM (database change maps) would be perfect if I could somehow obtain that
information through a fairly simple sql query. Otherwise I thought of using
tran log backup sizes, but I'm not sure how accurately that reflects actual
data change in the databases.
Any help is appreciated.
Thanks,
Aaron Sentell
Sr. DBA
Compass Group USA
It depends on whether they increment multiple times for the same data. If not, you could create a database snapshot. Average its size over several days to get a good number. However, I would thoroughly test when it comes to relying on a 3rd-party product for database recovery.
Jason Massie
Web: http://statisticsio.com
RSS: http://feeds.feedburner.com/statisticsio
"Aaron Sentell" <Aaron Sentell@.discussions.microsoft.com> wrote in message
news:4A5BA321-B82C-4F41-A94D-5F2B5B816600@.microsoft.com...
>I need to determine the rate of change (ROC) in bytes/day for my production
> databases. You might ask why I'm doing this. We have a new NetApp SAN, and
> it
> has the ability to perform SQL Server snapshots for amazingly fast
> short-term
> backup and recovery. In order to implement this, however, I need to size
> the
> volumes appropriately and not waste a lot of space. The formula I have
> been
> provided is this:
> Space for Snapshot copies = ROC in bytes per day * number of Snapshot
> copies
> Does anybody have any ideas as to how I might calculate ROC? I think using
> DCM (database change maps) would be perfect if I could somehow obtain that
> information through a fairly simple sql query. Otherwise I thought of
> using
> tran log backup sizes, but I'm not sure how accurately that reflects
> actual
> data change in the databases.
> Any help is appreciated.
> Thanks,
> Aaron Sentell
> Sr. DBA
> Compass Group USA
>
|||I don't believe that for SQL Server ROC will be useful for properly determining space needed for snapshots. Suppose you change one byte in one
row of one table and everything else is the same. What is key here for
snapshot size is how much physical disk space the snapshot mechanism marks
off as changed. I assure you it isn't 1 byte. Probably more like 8-64K or
possibly even more. Now, suppose you change 1 byte on every physically
minimum structure used in snapshotting. You may have changed say 100000
bytes but have to save off 6GB worth of space as changed. At least that is
my understanding.
To answer your question however, you can examine the output of sp_spaceused.
This is not guaranteed to be accurate however unless you first run dbcc
updateusage (which can be intensive on the server). On 2005 you can add up
pages on several DMVs related to storage (such as
sys.dm_db_index_physical_stats). Search the web for several canned queries
for this.
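One way to turn the sp_spaceused / DMV suggestion into a daily number is sketched below; the SpaceHistory table and the daily job are hypothetical, it runs per database on SQL Server 2005, and (per the caveats above) it measures net growth of reserved space, not block-level churn:
CREATE TABLE dbo.SpaceHistory
(
    sample_date datetime NOT NULL DEFAULT GETDATE(),
    reserved_kb bigint   NOT NULL
)

-- run once a day (e.g. from a SQL Agent job)
INSERT INTO dbo.SpaceHistory (reserved_kb)
SELECT SUM(total_pages) * 8          -- pages are 8 KB
FROM sys.allocation_units

-- difference between the two most recent samples = rough net change per day (ignores the sign of the change)
SELECT MAX(reserved_kb) - MIN(reserved_kb) AS kb_change_per_day
FROM (SELECT TOP 2 reserved_kb
      FROM dbo.SpaceHistory
      ORDER BY sample_date DESC) AS recent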
Tran log backups are not exactly equal to actual data change but can be a
gross estimate. The issue here is that the same data can get updated over
and over which will be logged entries but not necessarily be more physical
"ROC".
Kevin G. Boles
TheSQLGuru
Indicium Resources, Inc.
kgboles a earthlink dt net
"Aaron Sentell" <Aaron Sentell@.discussions.microsoft.com> wrote in message
news:4A5BA321-B82C-4F41-A94D-5F2B5B816600@.microsoft.com...
>I need to determine the rate of change (ROC) in bytes/day for my production
> databases. You might ask why I'm doing this. We have a new NetApp SAN, and
> it
> has the ability to perform SQL Server snapshots for amazingly fast
> short-term
> backup and recovery. In order to implement this, however, I need to size
> the
> volumes appropriately and not waste a lot of space. The formula I have
> been
> provided is this:
> Space for Snapshot copies = ROC in bytes per day * number of Snapshot
> copies
> Does anybody have any ideas as to how I might calculate ROC? I think using
> DCM (database change maps) would be perfect if I could somehow obtain that
> information through a fairly simple sql query. Otherwise I thought of
> using
> tran log backup sizes, but I'm not sure how accurately that reflects
> actual
> data change in the databases.
> Any help is appreciated.
> Thanks,
> Aaron Sentell
> Sr. DBA
> Compass Group USA
>
|||Aaron,
An extra thought: assuming that you are running differential backups, you could calculate from the size of the differentials. A differential marks an 8
page extent (64 KB) for backing up whenever anything changes on a page in
that extent. If the SAN is also snapshotting at that granularity, you might
get an idea.
Of course, once an extent is marked, it continues in the differential bit
map only once, although it may be updated every day. Perhaps you could get
the measure of the day, by doing something like the following for a little
while:
11:00 PM - Differential backup
12:00 AM - Full backup
That way, every 11:00 PM differential would only be one day's worth of
changes. You would still need some way of estimating the velocity of
changes per extent, but you might be able to examine your work load and make
a semi-educated guess.
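In T-SQL terms the schedule above is just two scheduled BACKUP statements (the database name and file paths are placeholders):
-- 11:00 PM job: differential since the previous midnight full = roughly one day of changed extents
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_diff.bak' WITH DIFFERENTIAL, INIT

-- 12:00 AM job: full backup resets the differential base for the next day
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_full.bak' WITH INIT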
FWIW,
RLF
"Aaron Sentell" <Aaron Sentell@.discussions.microsoft.com> wrote in message
news:4A5BA321-B82C-4F41-A94D-5F2B5B816600@.microsoft.com...
>I need to determine the rate of change (ROC) in bytes/day for my production
> databases. You might ask why I'm doing this. We have a new NetApp SAN, and
> it
> has the ability to perform SQL Server snapshots for amazingly fast
> short-term
> backup and recovery. In order to implement this, however, I need to size
> the
> volumes appropriately and not waste a lot of space. The formula I have
> been
> provided is this:
> Space for Snapshot copies = ROC in bytes per day * number of Snapshot
> copies
> Does anybody have any ideas as to how I might calculate ROC? I think using
> DCM (database change maps) would be perfect if I could somehow obtain that
> information through a fairly simple sql query. Otherwise I thought of
> using
> tran log backup sizes, but I'm not sure how accurately that reflects
> actual
> data change in the databases.
> Any help is appreciated.
> Thanks,
> Aaron Sentell
> Sr. DBA
> Compass Group USA
>
How to calculate cumulative rolling product on a 5 day window.
Here's what the data looks like:
REC#   RETURN     DESIRED RESULT
----   --------   --------------
  1    1.000859
  2    0.999705
  3    1.000471
  4    0.994672
  5    1.001945   -0.24%
  6    0.999718   -0.35%
  7    0.997262   -0.59%
  8    0.997613   -0.88%
  9    1.000932   -0.25%
 10    0.999584   -0.49%
To calculate the DESIRED RESULT on REC# 5, the following calculation
is needed: (notice 5 days of returns are multiplied together)
((1.000859 * 0.999705 * 1.000471 * 0.994672 * 1.001945) - 1) * 100 = -0.24
In order to calculate the DESIRED RESULT for REC# 6 I use the same
calculation only using RETURN values from REC#'s 2 through 6.
DESIRED RESULT on REC# 7 uses RETURN values from REC#'s 3 through 7
It has to be a 5 day window of values that is why REC#'s 1-4 have no
DESIRED RESULT.
I'm trying to figure out how to do this without using a cursor....I
hate cursors.
Any ideas? I've been thinking about it for a couple hours now and haven't found a good solution yet.
On Wed, 19 Sep 2007 19:45:46 -0000, Izzy wrote:
>Here's what the data looks like:
>REC# RETURN DESIRED RESULT
>-- -- --
> 1 1.000859
> 2 0.999705
> 3 1.000471
> 4 0.994672
> 5 1.001945 -0.24%
> 6 0.999718 -0.35%
> 7 0.997262 -0.59%
> 8 0.997613 -0.88%
> 9 1.000932 -0.25%
> 10 0.999584 -0.49%
>
>To calculate the DESIRED RESULT on REC# 5, the following calculation
>is needed: (notice 5 days of returns are multiplied together)
>((1.000859 * 0.999705 * 1.000471 * 0.994672 * 1.001945) - 1) * 100 = -0.24
>
>In order to calculate the DESIRED RESULT for REC# 6 I use the same
>calculation only using RETURN values from REC#'s 2 through 6.
>
>DESIRED RESULT on REC# 7 uses RETURN values from REC#'s 3 through 7
>
>It has to be a 5 day window of values that is why REC#'s 1-4 have no
>DESIRED RESULT.
>
>I'm trying to figure out how to do this without using a cursor....I
>hate cursors.
>
>Any ideas I've been thinking about if for a couple hours now and
>haven't found a good solution yet.
Hi Izzy,
If the REC# column is guaranteed to hold consecutive numbers (i.e. no
gaps), you can use
SELECT a.RecNo, a."Return",
CASE
WHEN COUNT(b."Return") = 5
THEN (POWER(10.0, SUM(LOG10(b."Return"))) - 1) * 100
ELSE NULL
END AS Result
FROM YourTable AS a
INNER JOIN YourTable AS b
ON b.RecNo BETWEEN a.RecNo - 4 AND a.RecNo
GROUP BY a.RecNo, a."Return";
Note that I'm using POWER(10.0,SUM(LOG10(Value))) to mimic the aggregate
product function that SQL Server lacks. This technique works if all
values are >0, as in your sample. If values =0 or <0 are allowed as
well, you'll have to use a more complicated expression to get the
aggregate product (Google for SQL Server product aggregate Itzik Ben-Gan
and you'll find it).
If RecNo values can have gaps, *and* you are on SQL Server 2005, you can
generate consecutive rownumbers by using ROW_NUMBER(). If you're still
on SQL Server 2000, you'll need some real nasty work - if you want me to
help with that, you'll have to post CREATE TABLE and SELECT statements
so that I can set up a test environment by using copy and paste.
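A rough sketch of that ROW_NUMBER() variant for the "gaps" case (SQL Server 2005; the #HIST_RETURNS table and column names match the sample script posted further down the thread):
WITH numbered AS
(
    SELECT DAYNUM, TOTAL_RETURN,
           ROW_NUMBER() OVER (ORDER BY DAYNUM) AS rn   -- consecutive even if DAYNUM has gaps
    FROM #HIST_RETURNS
)
SELECT a.DAYNUM, a.TOTAL_RETURN,
       CASE
         WHEN COUNT(b.TOTAL_RETURN) = 5
         THEN (POWER(10e0, SUM(LOG10(b.TOTAL_RETURN))) - 1) * 100   -- 10e0 keeps the literal a float (see the precision discussion further down)
         ELSE NULL
       END AS RESULT
FROM numbered AS a
INNER JOIN numbered AS b
        ON b.rn BETWEEN a.rn - 4 AND a.rn
GROUP BY a.DAYNUM, a.TOTAL_RETURN
ORDER BY a.DAYNUM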
--
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis|||I suggest you read the thread:
microsoft.public.sqlserver.programming
Sunday, July 01, 2007 7:45 AM
TSQl-Accmulations
http://tinyurl.com/yvdqbc
Look especially at these 2 solutions in the thread (which assume S2005):
Steve Dassin
Monday, July 02, 2007 10:18 PM
Itzik Ben-Gan
Tuesday, July 03, 2007 5:16 AM
(But I think Itzik Ben-Gan also shows possible S2000 solutions).
In your case (given S2005) the row definition is:
ROWS BETWEEN 4 PRECEDING AND CURRENT ROW
ie, exec SqlWindowCustomers @.from=-4,@.to=0
You can use Hugo's product aggregate to get what you want.
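As an aside for readers on SQL Server 2012 or later, where the ROWS window frame described above is supported, the whole calculation collapses to a single windowed aggregate; a sketch against the same hypothetical #HIST_RETURNS sample table:
SELECT DAYNUM, TOTAL_RETURN,
       CASE
         WHEN ROW_NUMBER() OVER (ORDER BY DAYNUM) >= 5   -- the first four rows have no full 5-day window yet
         THEN (POWER(10e0,
                     SUM(LOG10(TOTAL_RETURN)) OVER (ORDER BY DAYNUM
                                                    ROWS BETWEEN 4 PRECEDING AND CURRENT ROW))
               - 1) * 100
       END AS RESULT
FROM #HIST_RETURNS
ORDER BY DAYNUM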
Easy solutions to problems like this are what Itzik Ben-Gan argued for in:
Itzik Ben-Gan and Sujata Mehta
OVER Clause and Ordered Calculations
http://www.insidetsql.com/OVER_Clause_and_Ordered_Calculations.doc
which follow the sql standard of an sql window that's quite simple and
powerful, especially for moving/running sum problems.
And here is the MS response regarding Katmai (S2008):
OVER clause enhancement request - ORDER BY for aggregates
OVER clause enhancement request - ROWS and RANGE window sub-clauses
https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=254392
'This feature unfortunately did not fit into our schedule for SQL Server
2008.'
If Itzik Ben-Gan has so little pull what does that say for the rest
of us! Just how do they decide on what features to implement?
Who the hell is in charge there. Is anyone in charge? :-)
In this case I think it's fair to say the Connect(ion) is broken :-)
For more on this subject see:
http://beyondsql.blogspot.com/2007/07/dataphor-example-of-rapid-application.html
best,
www.beyondsql.blogspot.com|||On Sep 19, 4:46 pm, Hugo Kornelis
<h...@.perFact.REMOVETHIS.info.INVALID> wrote:
> On Wed, 19 Sep 2007 19:45:46 -0000, Izzy wrote:
> >Here's what the data looks like:
> >REC# RETURN DESIRED RESULT
> >-- -- --
> > 1 1.000859
> > 2 0.999705
> > 3 1.000471
> > 4 0.994672
> > 5 1.001945 -0.24%
> > 6 0.999718 -0.35%
> > 7 0.997262 -0.59%
> > 8 0.997613 -0.88%
> > 9 1.000932 -0.25%
> > 10 0.999584 -0.49%
> >To calculate the DESIRED RESULT on REC# 5, the following calculation
> >is needed: (notice 5 days of returns are multiplied together)
> >((1.000859 * 0.999705 * 1.000471 * 0.994672 * 1.001945) - 1) * 100 = -0.24
> >In order to calculate the DESIRED RESULT for REC# 6 I use the same
> >calculation only using RETURN values from REC#'s 2 through 6.
> >DESIRED RESULT on REC# 7 uses RETURN values from REC#'s 3 through 7
> >It has to be a 5 day window of values that is why REC#'s 1-4 have no
> >DESIRED RESULT.
> >I'm trying to figure out how to do this without using a cursor....I
> >hate cursors.
> >Any ideas I've been thinking about if for a couple hours now and
> >haven't found a good solution yet.
> Hi Izzy,
> If the REC# column is guaranteed to hold consecutive numbers (i.e. no
> gaps), you can use
> SELECT a.RecNo, a."Return",
> CASE
> WHEN COUNT(b."Return") = 5
> THEN (POWER(10.0, SUM(LOG10(b."Return"))) - 1) * 100
> ELSE NULL
> END AS Result
> FROM YourTable AS a
> INNER JOIN YourTable AS b
> ON b.RecNo BETWEEN a.RecNo - 4 AND a.RecNo
> GROUP BY a.RecNo, a."Return";
> Note that I'm using POWER(10.0,SUM(LOG10(Value))) to mimic the aggregate
> product function that SQL Server lacks. This technique works if all
> values are >0, as in your sample. If values =0 or <0 are allwoed as
> well, you'll have to use a more complicated expression to get the
> aggregate product (Google for SQL Server product aggregate Itzik Ben-Gan
> and you'll find it).
> If RecNo values can have gaps, *and* you are on SQL Server 2005, you can
> generate consecutive rownumbers by using ROW_NUMBER(). If you're still
> on SQL Server 2000, you'll need some real nasty work - if you want me to
> help with that, you'll have to post CREATE TABLE and SELECT statements
> so that I can set up a test environment by using copy and paste.
> --
> Hugo Kornelis, SQL Server MVP
> My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
Thanks Hugo, I think we are on the right path... but when I run your query it returns only 0.0 for all rows.
RecNo is consecutive without gaps.
Return column is of type FLOAT.
I'm using SQL Server 2005
To set up a test:
CREATE TABLE #HIST_RETURNS
(
DAYNUM INT,
TOTAL_RETURN FLOAT
);
INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
VALUES (1, 1.000859);
INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
VALUES (2, 0.999705);
INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
VALUES (3, 1.000471);
INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
VALUES (4, 0.994672);
INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
VALUES (5, 1.001945);
INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
VALUES (6, 0.999718);
INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
VALUES (7, 0.997262);
INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
VALUES (8, 0.997613);
INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
VALUES (9, 1.000932);
INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
VALUES (10, 0.999584);
SELECT A.DAYNUM, A.TOTAL_RETURN,
CASE
WHEN COUNT(B.TOTAL_RETURN) = 5
THEN (POWER(10.0, SUM(LOG10(B.TOTAL_RETURN))) - 1) * 100
ELSE NULL
END AS RESULT
FROM #HIST_RETURNS AS A
JOIN #HIST_RETURNS AS B ON B.DAYNUM BETWEEN A.DAYNUM - 4 AND
A.DAYNUM
GROUP BY A.DAYNUM, A.TOTAL_RETURN
ORDER BY A.DAYNUM;
DROP TABLE #HIST_RETURNS;
Thanks for the response.|||On Sep 20, 8:05 am, Izzy <israel.rich...@.gmail.com> wrote:
> On Sep 19, 4:46 pm, Hugo Kornelis
>
>
> <h...@.perFact.REMOVETHIS.info.INVALID> wrote:
> > On Wed, 19 Sep 2007 19:45:46 -0000, Izzy wrote:
> > >Here's what the data looks like:
> > >REC# RETURN DESIRED RESULT
> > >-- -- --
> > > 1 1.000859
> > > 2 0.999705
> > > 3 1.000471
> > > 4 0.994672
> > > 5 1.001945 -0.24%
> > > 6 0.999718 -0.35%
> > > 7 0.997262 -0.59%
> > > 8 0.997613 -0.88%
> > > 9 1.000932 -0.25%
> > > 10 0.999584 -0.49%
> > >To calculate the DESIRED RESULT on REC# 5, the following calculation
> > >is needed: (notice 5 days of returns are multiplied together)
> > >((1.000859 * 0.999705 * 1.000471 * 0.994672 * 1.001945) - 1) * 100 = -0.24
> > >In order to calculate the DESIRED RESULT for REC# 6 I use the same
> > >calculation only using RETURN values from REC#'s 2 through 6.
> > >DESIRED RESULT on REC# 7 uses RETURN values from REC#'s 3 through 7
> > >It has to be a 5 day window of values that is why REC#'s 1-4 have no
> > >DESIRED RESULT.
> > >I'm trying to figure out how to do this without using a cursor....I
> > >hate cursors.
> > >Any ideas I've been thinking about if for a couple hours now and
> > >haven't found a good solution yet.
> > Hi Izzy,
> > If the REC# column is guaranteed to hold consecutive numbers (i.e. no
> > gaps), you can use
> > SELECT a.RecNo, a."Return",
> > CASE
> > WHEN COUNT(b."Return") = 5
> > THEN (POWER(10.0, SUM(LOG10(b."Return"))) - 1) * 100
> > ELSE NULL
> > END AS Result
> > FROM YourTable AS a
> > INNER JOIN YourTable AS b
> > ON b.RecNo BETWEEN a.RecNo - 4 AND a.RecNo
> > GROUP BY a.RecNo, a."Return";
> > Note that I'm using POWER(10.0,SUM(LOG10(Value))) to mimic the aggregate
> > product function that SQL Server lacks. This technique works if all
> > values are >0, as in your sample. If values =0 or <0 are allwoed as
> > well, you'll have to use a more complicated expression to get the
> > aggregate product (Google for SQL Server product aggregate Itzik Ben-Gan
> > and you'll find it).
> > If RecNo values can have gaps, *and* you are on SQL Server 2005, you can
> > generate consecutive rownumbers by using ROW_NUMBER(). If you're still
> > on SQL Server 2000, you'll need some real nasty work - if you want me to
> > help with that, you'll have to post CREATE TABLE and SELECT statements
> > so that I can set up a test environment by using copy and paste.
> > --
> > Hugo Kornelis, SQL Server MVP
> > My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
> Thanks Hugo I think we are on the right path......but
> When I run your query it returns only 0.0 for all rows.
> RecNo is consecutive without gaps.
> Return column is of type FLOAT.
> I'm using SQL Server 2005
> To set up a test:
> CREATE TABLE #HIST_RETURNS
> (
> DAYNUM INT,
> TOTAL_RETURN FLOAT
> );
> INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> VALUES (1, 1.000859);
> INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> VALUES (2, 0.999705);
> INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> VALUES (3, 1.000471);
> INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> VALUES (4, 0.994672);
> INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> VALUES (5, 1.001945);
> INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> VALUES (6, 0.999718);
> INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> VALUES (7, 0.997262);
> INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> VALUES (8, 0.997613);
> INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> VALUES (9, 1.000932);
> INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> VALUES (10, 0.999584);
> SELECT A.DAYNUM, A.TOTAL_RETURN,
> CASE
> WHEN COUNT(B.TOTAL_RETURN) = 5
> THEN (POWER(10.0, SUM(LOG10(B.TOTAL_RETURN))) - 1) * 100
> ELSE NULL
> END AS RESULT
> FROM #HIST_RETURNS AS A
> JOIN #HIST_RETURNS AS B ON B.DAYNUM BETWEEN A.DAYNUM - 4 AND
> A.DAYNUM
> GROUP BY A.DAYNUM, A.TOTAL_RETURN
> ORDER BY A.DAYNUM;
> DROP TABLE #HIST_RETURNS;
> Thanks for the response.
I think I figured it out, by adding more zeros to the "10.0" and making it "10.00000".
SELECT A.DAYNUM, A.TOTAL_RETURN,
CASE
WHEN COUNT(B.TOTAL_RETURN) = 5
THEN (POWER(10.000000, SUM(LOG10(B.TOTAL_RETURN))) - 1) * 100
ELSE NULL
END AS RESULT
FROM #HIST_RETURNS AS A
JOIN #HIST_RETURNS AS B ON B.DAYNUM BETWEEN A.DAYNUM - 4 AND
A.DAYNUM
GROUP BY A.DAYNUM, A.TOTAL_RETURN
ORDER BY A.DAYNUM;|||On Sep 20, 8:17 am, Izzy <israel.rich...@.gmail.com> wrote:
> On Sep 20, 8:05 am, Izzy <israel.rich...@.gmail.com> wrote:
>
>
> > On Sep 19, 4:46 pm, Hugo Kornelis
> > <h...@.perFact.REMOVETHIS.info.INVALID> wrote:
> > > On Wed, 19 Sep 2007 19:45:46 -0000, Izzy wrote:
> > > >Here's what the data looks like:
> > > >REC# RETURN DESIRED RESULT
> > > >-- -- --
> > > > 1 1.000859
> > > > 2 0.999705
> > > > 3 1.000471
> > > > 4 0.994672
> > > > 5 1.001945 -0.24%
> > > > 6 0.999718 -0.35%
> > > > 7 0.997262 -0.59%
> > > > 8 0.997613 -0.88%
> > > > 9 1.000932 -0.25%
> > > > 10 0.999584 -0.49%
> > > >To calculate the DESIRED RESULT on REC# 5, the following calculation
> > > >is needed: (notice 5 days of returns are multiplied together)
> > > >((1.000859 * 0.999705 * 1.000471 * 0.994672 * 1.001945) - 1) * 100 = -0.24
> > > >In order to calculate the DESIRED RESULT for REC# 6 I use the same
> > > >calculation only using RETURN values from REC#'s 2 through 6.
> > > >DESIRED RESULT on REC# 7 uses RETURN values from REC#'s 3 through 7
> > > >It has to be a 5 day window of values that is why REC#'s 1-4 have no
> > > >DESIRED RESULT.
> > > >I'm trying to figure out how to do this without using a cursor....I
> > > >hate cursors.
> > > >Any ideas I've been thinking about if for a couple hours now and
> > > >haven't found a good solution yet.
> > > Hi Izzy,
> > > If the REC# column is guaranteed to hold consecutive numbers (i.e. no
> > > gaps), you can use
> > > SELECT a.RecNo, a."Return",
> > > CASE
> > > WHEN COUNT(b."Return") = 5
> > > THEN (POWER(10.0, SUM(LOG10(b."Return"))) - 1) * 100
> > > ELSE NULL
> > > END AS Result
> > > FROM YourTable AS a
> > > INNER JOIN YourTable AS b
> > > ON b.RecNo BETWEEN a.RecNo - 4 AND a.RecNo
> > > GROUP BY a.RecNo, a."Return";
> > > Note that I'm using POWER(10.0,SUM(LOG10(Value))) to mimic the aggregate
> > > product function that SQL Server lacks. This technique works if all
> > > values are >0, as in your sample. If values =0 or <0 are allwoed as
> > > well, you'll have to use a more complicated expression to get the
> > > aggregate product (Google for SQL Server product aggregate Itzik Ben-Gan
> > > and you'll find it).
> > > If RecNo values can have gaps, *and* you are on SQL Server 2005, you can
> > > generate consecutive rownumbers by using ROW_NUMBER(). If you're still
> > > on SQL Server 2000, you'll need some real nasty work - if you want me to
> > > help with that, you'll have to post CREATE TABLE and SELECT statements
> > > so that I can set up a test environment by using copy and paste.
> > > --
> > > Hugo Kornelis, SQL Server MVP
> > > My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
> > Thanks Hugo I think we are on the right path......but
> > When I run your query it returns only 0.0 for all rows.
> > RecNo is consecutive without gaps.
> > Return column is of type FLOAT.
> > I'm using SQL Server 2005
> > To set up a test:
> > CREATE TABLE #HIST_RETURNS
> > (
> > DAYNUM INT,
> > TOTAL_RETURN FLOAT
> > );
> > INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> > VALUES (1, 1.000859);
> > INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> > VALUES (2, 0.999705);
> > INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> > VALUES (3, 1.000471);
> > INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> > VALUES (4, 0.994672);
> > INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> > VALUES (5, 1.001945);
> > INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> > VALUES (6, 0.999718);
> > INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> > VALUES (7, 0.997262);
> > INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> > VALUES (8, 0.997613);
> > INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> > VALUES (9, 1.000932);
> > INSERT INTO #HIST_RETURNS (DAYNUM, TOTAL_RETURN)
> > VALUES (10, 0.999584);
> > SELECT A.DAYNUM, A.TOTAL_RETURN,
> > CASE
> > WHEN COUNT(B.TOTAL_RETURN) = 5
> > THEN (POWER(10.0, SUM(LOG10(B.TOTAL_RETURN))) - 1) * 100
> > ELSE NULL
> > END AS RESULT
> > FROM #HIST_RETURNS AS A
> > JOIN #HIST_RETURNS AS B ON B.DAYNUM BETWEEN A.DAYNUM - 4 AND
> > A.DAYNUM
> > GROUP BY A.DAYNUM, A.TOTAL_RETURN
> > ORDER BY A.DAYNUM;
> > DROP TABLE #HIST_RETURNS;
> > Thanks for the response.
> I think I figured it out, by adding more zero's to the "10.0" and
> making it "10.00000".
> SELECT A.DAYNUM, A.TOTAL_RETURN,
> CASE
> WHEN COUNT(B.TOTAL_RETURN) = 5
> THEN (POWER(10.000000, SUM(LOG10(B.TOTAL_RETURN))) - 1) * 100
> ELSE NULL
> END AS RESULT
> FROM #HIST_RETURNS AS A
> JOIN #HIST_RETURNS AS B ON B.DAYNUM BETWEEN A.DAYNUM - 4 AND
> A.DAYNUM
> GROUP BY A.DAYNUM, A.TOTAL_RETURN
> ORDER BY A.DAYNUM;
Thanks a bunch for that post Hugo it was exactly what I needed.|||On 20 Sep 2007 06:05:35 -0700, Izzy wrote:
(snip)
>Thanks Hugo I think we are on the right path......but
>When I run your query it returns only 0.0 for all rows.
(...)
>To set up a test:
Hi Izzy,
Thanks for providing a repro. Interesting find!
I've seen your other post with the workaround, but I think I've found a
better way - replace 10.0 (or 10.00000) with 10e0. This results in more
precision in the results from the query:
SELECT A.DAYNUM, A.TOTAL_RETURN,
CASE
WHEN COUNT(B.TOTAL_RETURN) = 5
THEN (POWER(10e0, SUM(LOG10(B.TOTAL_RETURN))) - 1) * 100
ELSE NULL
END AS RESULT
FROM #HIST_RETURNS AS A
JOIN #HIST_RETURNS AS B
ON B.DAYNUM BETWEEN A.DAYNUM - 4 AND A.DAYNUM
GROUP BY A.DAYNUM, A.TOTAL_RETURN
ORDER BY A.DAYNUM;
My guess at the cause is that 10.0 is interpreted by the query engine as
DECIMAL(3,1). The other values are then converted from FLOAT to this
same datatype, causing you to lose quite a bit of precision. Your
workaround of 10.00000 reduced the precision loss, since you now use
DECIMAL(8,6). The 10e0 in my solution is considered to be FLOAT, so
there is no conversion required at all.
Another solution would be to explicitly write CAST(10 AS float)
--
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
how to calculate cost of a trigger?
I want to use derived fields to improve select performance in my system.
How can I calculate the trigger cost in the system?
Triggers are resource-intensive objects; you should look for alternatives before implementing them. You can run the Profiler to find out the cost of any code you run through SQL Server. You will find the Profiler under Tools in Management Studio. Hope this helps.
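For a quick script-only measurement alongside Profiler, SET STATISTICS TIME/IO can be used to compare a statement with and without the trigger; the table, key and trigger names below are hypothetical:
SET STATISTICS TIME ON
SET STATISTICS IO ON

UPDATE dbo.Task SET Subject = Subject WHERE TaskID = 1    -- run once with the trigger enabled ...

-- DISABLE TRIGGER trg_Task_SyncNames ON dbo.Task         -- ... and once with it disabled, then compare the output
-- UPDATE dbo.Task SET Subject = Subject WHERE TaskID = 1

SET STATISTICS TIME OFF
SET STATISTICS IO OFF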
|||I want to use derived fields to improve select performance in my system. How can I calculate the trigger cost in the system?
We are designing the data model, and the TASK table has two mutually exclusive fields:
WHOID > PersonID, OrganizationID
WHATID > DocumentID, EstateID, LeadID, RequirementID,
On the Task UI we display the names of the related entity types, and this costs us 10 inner joins to pull the data from the database.
One of the designers on the team suggested adding derived fields such as
WHONAME > PersonName, OrganizationName
WHATNAME > DocumentName, EstateType, LeadName, RequirementName
So from a performance perspective we are really torn: maintaining the derived fields with triggers causes implicit locks on the system, whereas the other option costs 10 joins.
There is a one-to-many relationship between WHOID and TASK (3 related entity types, mutually exclusive)
and also a one-to-many relationship between WHATID and TASK (7 related entity types, mutually exclusive).
7 joins + 3 joins = 10 joins.
It is a really simple UI, in fact.
The Edit screen of TASK EDIT
--------------
Subject > char
FromDate > datetime
Todate > datetime
Priority > char
State > char
Email > char
WhoName > Related Person, Organization or Lead name
WhatName > Related Document, Estate, Requirement, Case, Opportunity name, etc.
------
I would love not to have ten joins, but the requirement forces us that way.
What you need is simple DRI (declarative referential integrity) enabled on the tables in Management Studio, so updates will cascade and, if needed, deletes will cascade. I know you need a DDL (data definition language) person on your team, because table design comes down to files and association, while relationship is determined by upper and lower bound cardinality. If you follow that, most of your tables will disappear, and what is now a JOIN will become a simple additional condition which in SQL Server can be added with the AND operator to a JOIN operation.
Some people do just DDL (data definition language) for a living. Run a search for DRI (declarative referential integrity) in SQL Server BOL (Books Online). Hope this helps.
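As a purely hypothetical illustration of the cascading DRI being described, a foreign key declared with ON UPDATE CASCADE looks like this:
CREATE TABLE dbo.Person
(
    PersonName varchar(100) NOT NULL PRIMARY KEY
)

CREATE TABLE dbo.Task
(
    TaskID     int IDENTITY(1,1) PRIMARY KEY,
    Subject    varchar(200) NOT NULL,
    PersonName varchar(100) NOT NULL
        REFERENCES dbo.Person (PersonName)
        ON UPDATE CASCADE    -- renaming a person in dbo.Person flows to every related Task row automatically
)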
|||Thanks Caddre, that looks interesting.
http://www.cvalde.net/document/declaRefIntegVsTrig.htm
But I have a hard time understanding how it will change my WHONAME and WHATNAME.
If it does, it would be great.
|||I could be wrong, but what you have looks like objects to me, because if those are two tables you need just two IDENTITY columns or it becomes a composite. Try the link below and download and install AdventureWorks 2005; most tables you need in a business application are in there. Look at the tables and pay attention to the constraints and DRI (declarative referential integrity). Hope this helps.
http://www.microsoft.com/downloads/details.aspx?familyid=e719ecf7-9f46-4312-af89-6ad8702e4e6e&displaylang=en
|||Is there a problem with the 10 joins?
Personally, I would just create a view to simplify the select statement that has all your required joins in it. Just make sure your tables are indexed properly, and you shouldn't have an issue.
|||Hmmm, I have tested it with 10 million records and the derived approach is 20X faster.
I really don't want to use triggers, but for now I'll just try the links Caddre sent.
|||If you aren't displaying all 10 million rows at a time within the UI, you may find it beneficial to do your limiting select first, then do the 10 joins once you have your limited resultset back, kind of like:
SELECT ...
FROM (Your limited query here)
JOIN ...
JOIN ...
JOIN ..
although why SQL Server isn't already doing that for you is odd; a bad query plan or inaccurate statistics, most likely. Before spending the time trying to work around SQL, I'd first run your query through the tuning wizard and see what it comes up with; you may be surprised. If you've already done that, my apologies.
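As a sketch of that shape (the TASK, Person, and Document table and column names below are only illustrative stand-ins for your schema), the inner query does the limiting and the joins only touch the surviving rows:
SELECT t.TaskID, t.Subject, p.PersonName, d.DocumentName
FROM (SELECT TaskID, Subject, WHOID, WHATID
      FROM dbo.TASK
      WHERE State = 'Open') AS t             -- limiting predicate applied first
LEFT JOIN dbo.Person   p ON p.PersonID   = t.WHOID
LEFT JOIN dbo.Document d ON d.DocumentID = t.WHATID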
You want to try an indexed view approach as well, which may give you what you are looking for. Something like:
CREATE VIEW dbo.MyView WITH SCHEMABINDING AS
SELECT (Your 10 fields here),t1.Name,t2.Name,t3.Name,t4.Name,...
FROM dbo.MyTable mt
JOIN dbo.EntityRef t1 ON (mt.key = t1.key)
...
Then create a unique clustered index on the view, then create your other indexes (whatever you are limiting/searching on)
You'll incur much of the same overhead as triggers though, so test your results.
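To complete that sketch, the unique clustered index is what actually materializes the view, and on Standard Edition the view then has to be referenced directly with NOEXPAND (the key and column names here are just illustrative):
CREATE UNIQUE CLUSTERED INDEX IX_MyView_TaskID
ON dbo.MyView (TaskID)              -- this index materializes the view on disk

SELECT TaskID, Subject, WhoName
FROM dbo.MyView WITH (NOEXPAND)     -- required on Standard Edition to use the view's index
WHERE TaskID = 42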
|||As a side note, Caddre has mentioned a lot of differing technologies, one of which is DRI. DRI by itself won't help to solve your issue, but by leveraging it, if possible, it could reduce the complexity of your database design, and increase your performance.
For example, take table1, and table2 here:
Table Name > Fields
Table1>EntityID,col1,col2,col3,col4...
Table2>EntityID,Name
Normally DRI is used to link both entityIDs in table1 and table2 in a foreign key relationship. This by itself will not gain you any performance at all; in fact, in many operations it will decrease overall performance substantially. However, if you redesign your tables, like so:
Table1>Name,col1,col2,col3,col4...
Table2>Name
Now, by using DRI to maintain the relationship on Name (via a foreign key), it's quite possible you will see a performance increase, since you have just eliminated a join in your selects. Foreign keys can be used to "replicate" changes to the name field made in table2 to all records in table1 that use that name. Now, that's one VERY expensive update, but it does ensure that all rows in table1 that link to a particular name use the exact same name. This is usually why you use identifiers in your base tables to begin with, and since those changes are very infrequent (or should be), it's a decent trade-off in performance; you can make a pretty big mess of your database structure (and data) by taking this route if you aren't careful, though. In SQL Server these are called cascading deletes and updates.
Using our above new schema, if you issue this:
UPDATE table2 SET Name='new value' where Name='old value'
SQL Server will actually do something like this:
BEGIN TRANSACTION
UPDATE table2 SET Name='new value' where Name='old value'
UPDATE table1 SET Name='new value' where Name='old value'
COMMIT TRANSACTION
With better error handling, of course. If there are any errors, it's all rolled back. It's called cascading because a delete in table2 would delete some records in table1, which in turn may delete some records in table3, etc.
Now with all that out of the way, you have to realize that by replacing your small ID field with a probably much larger field, sure, you lose the join, but you've increased your row size. Indexes will be larger, and pages will contain fewer rows. Operations that need to scan large portions of your table will take longer as well. So test a lot, not just on what you need to do now, but think about and test the ramifications on all your other queries as well.
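In DDL terms, the cascading behaviour described above is declared on the foreign key itself; a minimal sketch using the Table1/Table2 shapes from the example:
CREATE TABLE dbo.Table2 (
    Name varchar(100) NOT NULL PRIMARY KEY
)

CREATE TABLE dbo.Table1 (
    Name varchar(100) NOT NULL,
    col1 int NULL,
    col2 int NULL,
    CONSTRAINT FK_Table1_Table2 FOREIGN KEY (Name)
        REFERENCES dbo.Table2 (Name)
        ON UPDATE CASCADE   -- a rename in Table2 ripples into every matching Table1 row
        ON DELETE CASCADE   -- a delete in Table2 removes the dependent Table1 rows
)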
|||I have thought of another method today, which might be wrong... instead of using DRI and triggers to maintain the data, I am thinking about writing a class just for the purpose of inserting and updating data in my derived fields. From the code-behind I will make sure the derived columns are updated whenever an update happens in my main tables.
The advantages of this approach over triggers.
1. No implicit transactions
2. Easy to debug
3. Faster than triggers
4. No unnecessary updates
5. Could write code to maintain and check all the data.
Disadvantages I see
1. Triggers are safer in terms of integrity.
2. With triggers there is less chance of forgetting to update the derived fields.
*I am really confused somehow, but the requirements of the application force us not to use three things that will increase the SQL CPU load:
A:Triggers
B.UNIONS,Exits
C.Joins
|||I fail to see how most of what you've listed as "advantages" are advantages. I would say:
Advantages:
1. No implicit transactions (All SQL Statements are an implied transaction, you haven't eliminated them, just simplified it -- But that's an advantage still)
2. Easy to debug -- For your application, if it's the only one accessing the data, yes.
3. Faster than triggers -- I doubt it. Sounds like you are going to have to make an extra round trip between the server and client to pull back the name value that you want to put in the derived field, which will make it many times slower, while holding locks for slightly less time. Unless you are caching the table(s) the 10 joins are getting their names from, in which case you need to count the added application/web server memory cost as a disadvantage as well.
4. No unnecessary updates.
5. Could write code to maintain and check all the data. -- This is not an advantage. It's a solution to the problem caused by the derived fields. It should be listed under disadvantages as "Have to write code to maintain and check all the data".
Disadvantages:
1. Less safe in terms of integrity than triggers.
2. Possible data out of sync.
3. Increased application code, and extra logic that must be placed into every application that might need to update the data (or each routine within your application).
4. Have to write code to periodically maintain and check database integrity.
--
Please look into the indexed views I mentioned earlier, which don't suffer from the integrity issues and don't require more application logic (other than you select from one view and update a base table, but you're doing that from stored procedures now anyhow, right?). If the indexed views don't work for you, you could also use an INSTEAD OF UPDATE/INSERT trigger (SQL Server's closest equivalent to a BEFORE trigger), which isn't the nicest thing in the world to write, but keeps many of your listed advantages and eliminates the disadvantages. (Guaranteed database integrity + no unnecessary updates + smaller lock time + faster than a client-side lookup)
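A minimal sketch of what such a trigger could look like for the WHONAME case; the dbo.TASK (TaskID, Subject, WHOID, WHONAME) and dbo.Person (PersonID, PersonName) names are assumptions, and every other updatable TASK column would have to be carried through the same way:
CREATE TRIGGER trg_Task_Update ON dbo.TASK
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON
    -- Re-apply the caller's update; refresh WHONAME only when WHOID actually changed
    -- (add explicit NULL handling if WHOID is nullable)
    UPDATE t
       SET t.Subject = i.Subject,
           t.WHOID   = i.WHOID,
           t.WHONAME = CASE WHEN i.WHOID <> t.WHOID
                            THEN p.PersonName     -- simplified to Person only; the real version
                            ELSE t.WHONAME END    -- would also cover Organization, Lead, and so on
      FROM dbo.TASK AS t
      JOIN inserted AS i ON i.TaskID = t.TaskID   -- t still holds the pre-update values here
      LEFT JOIN dbo.Person AS p ON p.PersonID = i.WHOID
END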
As for your confusion on what you shouldn't use... (Triggers, UNIONS, Exits (did you mean EXISTS?), and joins)... Sounds like you've gotten information from someone who fell in love with a dumb record manager from the 1980's (Paradox, BTrieve, ISAM, or MySQL -- granted the latter is evolving beyond its dumb record manager roots).
|||
I agree with Motley, because there is a reason the object world has not created a replacement for Peter Chen's 1976 work, the math that gave us what is now DRI (declarative referential integrity). I think you will run into data integrity problems as your application grows.
But the last part of your post is OK except for JOINs, because INNER JOINs with the AND operator will not eat up your CPU if you use Management Studio to show the execution cost and use the Profiler to optimize the cost of your SQL statement.
|||Indexed views > I would love to use them, but if I am not mistaken they have lots of rules to follow and I would have to buy Enterprise Edition; the 4-CPU price of SQL is 4 x 20,000 USD, whereas Standard Edition is 4 x 6,000 USD for web applications.
For now I'll be taking the risk of replicating data in order to get the performance.
*I would have preferred a normalized and clean data model where I can reuse the data logic without duplicating it. Practically, I could not find a way to achieve that in the physical design.
**Faster than triggers --> I might use threads (which might mean more trouble, but...).
|||You are mistaken. They work in Standard Edition (even SQL Express/MSDE); however, you must name the view directly (and use the NOEXPAND hint). In Enterprise Edition, if you have an indexed view and a select/update doesn't mention the view but would benefit from it, the optimizer will automatically use it.
However, you are correct that there are some limitations, among them no LEFT JOINs and no UNION. Just do those later (in another view that bases itself on the indexed view and left joins/unions with other data).
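As a sketch of that layering (dbo.MyView is the indexed view from earlier, dbo.TaskNote is a made-up table), the outer view carries the LEFT JOIN the indexed view cannot contain:
CREATE VIEW dbo.TaskListing
AS
SELECT iv.TaskID, iv.Subject, iv.WhoName, n.NoteText
FROM dbo.MyView AS iv WITH (NOEXPAND)               -- use the indexed view's index on Standard Edition
LEFT JOIN dbo.TaskNote AS n ON n.TaskID = iv.TaskID -- the LEFT JOIN lives outside the indexed view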
The nice part is, you can build it, test it, and see if it'll fit your needs in a matter of minutes.
How to calculate category value - perhaps use subqueries?
Hello all - I am trying to create a view that will calculate a category field
value based on several conditions regarding another field.
i.e. having a table PayCodes (employeeID, paycode)
employeeID paycode
--
1 01
1 02
1 02S
1 03S
1 71
2 01
2 02S
2 71
3 02
3 03H
4 01
4 02
I need to create a view that will output employeeID and overtimeType where
overtimeType = 1 if an employeeID has 02S or 02H
overtimeType = 2 if an employeeID has 03S or 03H
overtimeType = 3 if an employeeID has (02S or 02H) and (03S or 03H)
overtimeType = 0 if an employeeID has none of 02S, 02H, 03S, 03H
So given the above table, the view should return:
employeeID overtimeType
--
1 3
2 1
3 2
4 0
Any ideas?
Thanks in advance!!
Hellman.
On Tue, 4 Oct 2005 10:59:03 -0700, Hellman wrote:
(snip)
Hi Hellman,
I just posted a reply to your question in the .mseq group.
Please post your questions to one group only. And if you really feel
that a question fits two groups, use the crossposting ability of your
software to post one message to both groups at once, so that others will
see if there's already a reply in the other group, and we have all
reactions in one thread.
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)
On Tue, 4 Oct 2005 10:36:02 -0700, Hellman wrote:
>Hello all - I am trying to create a view that will calculate a category field
>value based on several conditions regarding another field.
>i.e. having a table PayCodes (employeeID, paycode)
>employeeID paycode
>--
> 1 01
> 1 02
> 1 02S
> 1 03S
> 1 71
> 2 01
> 2 02S
> 2 71
> 3 02
> 3 03H
> 4 01
> 4 02
>I need to create a view that will output employeeID and overtimeType where
> overtimeType = 1 if an employeeID has 02S or 02H
> overtimeType = 2 if an employeeID has 03S or 03H
> overtimeType = 3 if an employeeID has (02S or 02H) and (03S or 03H)
> overtimeType = 0 if an employeeID has none of 02S, 02H, 03S, 03H
>So given the above table, the view should return:
>employeeID overtimeType
>--
> 1 3
> 2 1
> 3 2
> 4 0
>Any ideas?
>Thanks in advance!!
>Hellman.
>
Hi Hellman,
I believe that the following will work:
SELECT employeeID,
SUM(overtimeType) AS overtimeType
FROM (SELECT DISTINCT
employeeID,
CASE
WHEN paycode LIKE '02[HS]' THEN 1
WHEN paycode LIKE '03[HS]' THEN 2
ELSE 0
END AS overtimeType
FROM PayCodes) AS d
GROUP BY employeeID
(untested - see www.aspfaq.com/5006 if you prefer tested results)
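As a quick check against the sample data, using a temporary table as a stand-in for the real PayCodes table:
CREATE TABLE #PayCodes (employeeID int, paycode varchar(3))
INSERT INTO #PayCodes (employeeID, paycode) VALUES
    (1,'01'),(1,'02'),(1,'02S'),(1,'03S'),(1,'71'),
    (2,'01'),(2,'02S'),(2,'71'),
    (3,'02'),(3,'03H'),
    (4,'01'),(4,'02')

SELECT employeeID,
       SUM(overtimeType) AS overtimeType
FROM (SELECT DISTINCT
             employeeID,
             CASE WHEN paycode LIKE '02[HS]' THEN 1
                  WHEN paycode LIKE '03[HS]' THEN 2
                  ELSE 0
             END AS overtimeType
      FROM #PayCodes) AS d
GROUP BY employeeID
-- expected: (1,3), (2,1), (3,2), (4,0)

DROP TABLE #PayCodes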
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)
how to calculate bandwidth requirement for client-server application
I am writing a client-server desktop application.
Front end is developed in VB
Back end database is using SQL 2000
The project is being developed using Classes.
The application will be accessed by 1,000+ users using a Virtual Private
Network to connect to the SQL database.
How do I calculate the bandwidth requirement per user per transaction for my
application?
Please advise on any tools that can be used to find out the above
information? Thanks.
kburito wrote:
> I am writing a client-server desktop application.
> Front end is developed in VB
> Back end database is using SQL 2000
> The project is being developed using Classes.
> The application will be accessed by 1,000+ users using a Virtual
> Private Network to connect to the SQL database.
> How to I calculate the Bandwidth requirement per user per transaction
> for my application?
> Please advise on any tools that can be used to Find out the above
> Information? Thanks.
A call to the database should be pretty small. Use stored procedures (no
server API cursors and the like), make sure result sets are as small as
possible. That means, don't include extraneous columns or unnecessary
rows in the result sets. Also, eliminate all Order By operations on
queries if the application is more than capable of sorting any needed
data itself.
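As a small illustration of the "narrow result set, no ORDER BY" advice, a procedure shape like the following keeps the wire traffic per call minimal (the Orders table and column names are made up):
CREATE PROCEDURE dbo.usp_GetOpenOrders
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON                        -- suppresses extra 'rows affected' messages on the wire
    SELECT OrderID, OrderDate, Status     -- only the columns the screen actually needs
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
      AND Status = 'Open'                 -- only the rows the screen needs; no ORDER BY, the client sorts
END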
Use read-only, forward-only result sets exclusively and
update/insert/delete data from other stored procedures.
Always fetch all result set data immediately and then process after
everything is fetched.
Never use Parameters.Refresh() type methods from ADO as they require a
lot of interrogation of the database and are slow. Create your
parameters programmatically.
You can use the Show Client Statistics option in Query Analyzer to see
client data for a particular call.
David Gugick
Imceda Software
www.imceda.com
How to calculate AVG. Disk Queue Length
Hi All,
we have collected perfmon data for a specific LUN. Here is the background of the LUN: it is RAID 10 with 4 physical disks. We have problems interpreting the data. In the perfmon counter screen we have a max of 435 and an average of 0.512. Can somebody tell us what it is that we are missing? Any help is greatly appreciated.
Thanks,
Venkat.
In theory the 'magic' number for avg. disk queue length is 2 per spindle.
There are, however, counters which are more interesting, like Avg. Disk sec/Read and Avg. Disk sec/Write, which should be below 20ms for data disks and below 3ms for log disks.
WesleyB
Visit my SQL Server weblog @. http://dis4ea.blogspot.com
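For SQL Server 2005 and later, the same per-file read/write latencies can also be sampled from inside the engine with a DMV query along these lines (a complement to the perfmon counters, not a replacement):
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       1.0 * vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,   -- compare to the ~20ms data-disk guideline
       1.0 * vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms   -- compare to the ~3ms log-disk guideline
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs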
|||Thanks for answering. Remember, when you look at the perfmon counters, you always have them in units such as 1.00 or 100. How do you correlate those units with the actual readings you are seeing?
|||The value is always in the unit the counter mentions. The graphs can have a different scale and this can be changed through the properties.
WesleyB
Visit my SQL Server weblog @. http://dis4ea.blogspot.com
How to calculate actual size of column?
I am wondering if there is any equivalent function available which
gives the actual length of a column in bytes, like VSIZE in
Oracle?
If not, how do I calculate the actual column size?
Thanks in advance.
Raj
Is the DATALENGTH function what you're looking for? For example:
SELECT DATALENGTH(au_lname)
FROM pubs..authors
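For a byte count next to a character count (DATALENGTH counts storage bytes, including trailing spaces; LEN counts characters and ignores trailing spaces), something like this shows both:
SELECT au_lname,
       LEN(au_lname)        AS character_count,   -- trailing spaces not counted
       DATALENGTH(au_lname) AS byte_count         -- 1 byte per char for varchar, 2 for nvarchar
FROM pubs..authors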
--
Hope this helps.
Dan Guzman
SQL Server MVP
--
SQL FAQ links (courtesy Neil Pike):
http://www.ntfaq.com/Articles/Index.cfm?DepartmentID=800
http://www.sqlserverfaq.com
http://www.mssqlserver.com/faq
--
"Raj" <vmresumes@.yahoo.com> wrote in message
news:8fdb01c345a2$356250c0$a401280a@.phx.gbl...
> Hi,
> I am wondering if any equivalent function available which
> gives the actual length of column in bytes as VSIZE in
> Oracle?
> If not how to calcuate actual column size?
> Thanks in advance.
> Raj