The offline domain join capability in Windows Server 2008 R2 enables administrators to gather the information needed to join a computer running Windows Server 2008 R2 or Windows 7 to a domain and save that information to the computer, without the computer requiring access to the domain controllers. When the computer starts for the first time in its final location, it automatically joins the domain using the saved information, with no interaction and no reboot necessary.
Once the provisioning is complete, you copy the resulting metadata file to the computer you want to join to the domain and run Djoin.exe again. The first computer, called the provisioning computer, must be running Windows Server 2008 R2 or Windows 7, and it must have access to a domain controller. By default, the domain controller must be running Windows Server 2008 R2. Optional parameters enable you to specify the name of an organizational unit (OU) in which to create the computer object, and the name of a specific domain controller to use.
To deploy the metadata on the target computer, which must also be running Windows Server 2008 R2 or Windows 7, you run Djoin.exe again, this time loading the metadata file you copied to the system. The system does not have to have access to its eventual domain, or even be connected to a network. Once you have provisioned the computer, you can move it to its final location. The next time you restart the system, it will be joined to the domain you specified and ready to use.
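As a sketch of the two-step workflow described above: the /provision and /requestODJ switches are documented Djoin.exe parameters, but the domain, computer, and file names here are placeholders.

```cmd
rem On the provisioning computer, which has access to a domain controller:
djoin /provision /domain contoso.com /machine BRANCH-PC01 /savefile C:\odj-blob.txt

rem On the target computer, which needs no domain or network connectivity:
djoin /requestODJ /loadfile C:\odj-blob.txt /windowspath %SystemRoot% /localos
```

On its next restart, the target computer completes the join using the saved metadata.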
You can also perform the offline domain join as part of an unattended operating system installation; to do so, you insert a reference to the metadata file that Djoin.exe creates into the Unattend.xml file.

Service Accounts

Applications and services require accounts to access network resources, just as users do, and Windows has traditionally provided built-in system accounts for this purpose. These accounts are simple to manage, but they do have drawbacks. First, they are local accounts, which means administrators cannot manage them at the domain level.
Second, these system accounts are typically shared by multiple applications, which can be a security issue. It is possible to configure an application to use a standard domain account.
This enables you to isolate the account security for a particular application, but it also requires you to manage the account passwords manually. If you change the account password on a regular basis, you must reconfigure the application that uses it, so that it supplies the correct password when logging on to the domain. The managed service account is a new feature in Windows Server 2008 R2 that takes the form of a new Active Directory object class.
Because managed service accounts are based on computer objects, they are not subject to Group Policy-based password and account policies as domain users are.
Managed service accounts also do not allow interactive logons, so they are an inherently more secure solution for applications and services. Most importantly, managed service accounts eliminate the need for manual credential management.
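As a sketch, creating and installing a managed service account takes two Active Directory cmdlets; the account name below is a placeholder.

```powershell
Import-Module ActiveDirectory

# On a management computer: create the managed service account in AD DS.
New-ADServiceAccount -Name SvcApp01

# On the computer hosting the application: install the account locally.
Install-ADServiceAccount -Identity SvcApp01
```

After installation, you configure the service to log on with the account, with no password to type or maintain.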
When you change the password of a managed service account, the system automatically updates all of the applications and services that use it. To use a managed service account for a particular application or service, you must run the Install-ADServiceAccount cmdlet on the computer hosting the application. The Best Practices Analyzer (BPA) has a collection of predefined rules for each role it supports: rules specifying the recommended architectural and configuration parameters for the role.
For example, one AD DS rule recommends that each domain have at least two domain controllers. When you run a BPA scan, the system compares the recommendations to the actual role configuration and points out any discrepancies. The scan returns a status indicator for each rule that indicates whether the system is compliant or noncompliant.
There is also a warning status for rules that are compliant at the time of the scan, but that might become noncompliant under other operational conditions. After a delay as the analyzer performs the scan, the results appear, as shown in the accompanying figure. The analyzer compares its preconfigured rules to the information in the XML file and reports the results.
Although storage space is cheaper and more plentiful than ever before, the increased emphasis on audio and video file types, whether business related or not, has led to a consumption rate that in many instances more than keeps pace with that growth. There is only one new role service in the File Services role, but there are innovative new features in some of the existing role services.
In an enterprise with multiple sites, increased storage capacity typically leads to increased consumption of bandwidth between sites, and these new features can help administrators manage this bandwidth consumption and improve file access times in the process.
Using the File Classification Infrastructure

An enterprise network can easily have millions of files stored on its servers, and administrators are responsible for all of them. However, different types of files have different management requirements.
Enterprise networks typically have a variety of storage technologies to accommodate their different needs. For example, drive arrays using Redundant Array of Independent Disks (RAID) for fault tolerance are excellent solutions for business-critical files, but they are also more expensive to purchase, set up, and maintain.
Storing noncritical files on a medium such as this would be a waste. At the other end of the spectrum, an offline or near-line storage medium, such as magnetic tape or optical disks, can provide inexpensive storage for files that are not needed on a regular basis, or that have been archived or retired.
The big problem for the administrator with a variety of storage options is determining which files require a certain treatment, and then seeing that they receive it; on a large network, this can be a major administrative burden. Traditional methods for classifying files include storing them in designated folders, applying special file naming conventions, and, in the case of backups, the long-standing use of the archive bit to indicate files that have changed.
None of these methods are particularly efficient for complex scenarios on a large scale, however, because of the manual maintenance they require or their limited flexibility. Who is going to be responsible for making sure that files are named properly, or moved to the appropriate folders? It would not be practical for IT personnel to monitor the file management practices of every user on the network. Also, if you designate one folder for files containing sensitive data and another for files that are modified often, what do you do with a file that is both sensitive and frequently updated?
Introducing the FCI Components

The File Classification Infrastructure (FCI) introduced in Windows Server 2008 R2 is a system that enables administrators to define their own file classifications, independent of directory structures and file names, and configure applications to perform specific actions based on those classifications. FCI consists of four components, as follows:

- Classification Properties: Attributes created by administrators that identify certain characteristics of files, such as their business value or level of sensitivity
- Classification Rules: Mechanisms that automatically apply classification properties to certain files based on specific criteria, such as file contents
- File Management Tasks: Scheduled operations that perform specified actions on files with certain classification properties
- Storage Reports Management: An engine that can generate reports that, among other things, document the distribution of classification properties on file server volumes

For example, an administrator might create a classification property that indicates whether a file contains personal or confidential information.
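To make the division of labor concrete, here is a toy sketch of a content-based classification rule assigning a hypothetical Confidentiality property. This is illustrative only; real FCI rules are configured in File Server Resource Manager, not written in code, and the property and pattern here are invented.

```python
import re

def classify(files, pattern, prop_name, prop_value):
    """Apply prop_name=prop_value to every file whose contents match pattern."""
    classified = {}
    for name, contents in files.items():
        props = {}
        if re.search(pattern, contents):
            props[prop_name] = prop_value
        classified[name] = props
    return classified

files = {
    "payroll.txt": "Employee SSN: 123-45-6789",
    "notes.txt":   "Lunch menu for Friday",
}
# Tag any file containing something shaped like a Social Security number.
result = classify(files, r"\d{3}-\d{2}-\d{4}", "Confidentiality", "High")
print(result["payroll.txt"])  # {'Confidentiality': 'High'}
print(result["notes.txt"])    # {}
```

A file management task would then act on every file carrying the property, regardless of its name or folder.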
Also new is the File Management Tools node, which you use to execute specific actions based on the file classifications you have created. The Storage Report Management node now includes the ability to generate reports based on FCI properties, as well as other, traditional criteria. FCI is designed to be more of a toolkit for storage administrators than an end-to-end solution. FCI provides various types of classification properties, but it is up to the individual administrator to apply them to the particular needs of an enterprise.
File Management Tools provides a basic file expiration function and the ability to execute custom commands against particular file classifications. However, FCI is also designed with an extensible infrastructure, so that third-party developers can integrate property-based file selection into their existing products.

Creating FCI Classification Properties

The first step in implementing FCI is to create the classification properties that you will apply to files with certain characteristics.
Classification properties are simple attributes, consisting only of a name, a property type, and sometimes a list of values. Property types indicate the nature of the classification you want to apply to your files; they do not have to contain the classification criteria themselves. FCI supports seven classification property types, as listed in the accompanying table. Aggregation refers to the behavior of a classification property type when a rule or other process attempts to assign the same property to a file, but with a different value.
An attempt to assign a second property value to an already-classified file results in an error. You can configure a rule to reevaluate files with these properties, but the rule will simply assign a new value that overwrites the old one, without considering the existing value of the property. When there is a value conflict, such as when one rule assigns a file High Security and another rule assigns it Low Security, the High Security value takes precedence, as shown on the left side of the accompanying figure, enabling the property to err on the side of caution and use the greatest possible security measures.
However, if you are seeking to categorize files based on subject, the Multiple Choice List property would probably be preferable, because it enables you to assign multiple values to a single file, as shown on the right side of the figure. After specifying a name for the property, and optionally a description, you select a Property Type; the controls change depending on the type you have chosen.
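The two conflict-resolution behaviors described above can be sketched as follows; the value names and precedence order are illustrative assumptions, not FCI internals.

```python
# An Ordered List property resolves a conflict by precedence:
# the highest-ranked value (High Security) wins.
SECURITY_ORDER = ["Low Security", "Medium Security", "High Security"]

def aggregate_ordered(existing, incoming):
    # Higher-precedence value wins the conflict.
    return max(existing, incoming, key=SECURITY_ORDER.index)

# A Multiple Choice List property keeps every assigned value instead.
def aggregate_multi_choice(existing, incoming):
    # Union of the assigned values, order preserved.
    return existing + [v for v in incoming if v not in existing]

print(aggregate_ordered("Low Security", "High Security"))
# High Security
print(aggregate_multi_choice(["Finance"], ["Legal", "Finance"]))
# ['Finance', 'Legal']
```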
The types that do not support a selection of possible values (Date-time, Number, and String) require no additional configuration. The other types enable you to add the possible values that your classification rules can assign to files, based on criteria you select.

Creating FCI Classification Rules

Once you have created your classification properties, you can assign them to your files by creating classification rules.
On the Rule Settings tab, shown in the accompanying figure, you supply a name for the rule, and optionally a description, and then click Add to define the scope; that is, specify the volumes or folders containing the files to which you want to apply properties.

NOTE These classification mechanisms take the form of plug-in modules, of which Windows Server 2008 R2 includes only two relatively rudimentary examples. Microsoft has designed this part of the FCI to be extensible, so that administrators and third-party developers can use the FCI application programming interface (API) to produce their own classification plug-ins, as well as scripts and applications that set properties on files.
In the Property Name and Property Value fields, you specify which of your classification properties you want to assign to the files the rule selects, and what value the rule should insert into the property. Clicking Advanced displays the Additional Rule Parameters dialog box, which contains the following tabs:

- Evaluation Type: Enables you to specify how the rule should behave when it encounters a file that already has a value defined for the specified property
You can elect to overwrite the existing property value or aggregate the values for properties that support aggregation. If you encrypt files after they have classification properties assigned, they retain those properties and applications can read them, but you cannot modify the properties or assign new ones while the files are in their encrypted state. Once you have created your classification rules, you must execute them to apply properties to your files.
You can click Run Classification With All Rules Now to execute your rules immediately, or you can click Configure Classification Schedule to run them at a later time or at regular intervals.
TIP Administrators new to FCI have a tendency to create large numbers of properties and rules, simply because they can. Be aware that processing rules, and especially those that search for complex regular expressions, can take a lot of time and consume a significant amount of server memory.
Microsoft recommends applying only classifications that your current applications can utilize.

Performing File Management Tasks

Once you have classified your files, you can use File Server Resource Manager to create file management tasks, which manipulate files based on their classification properties.
Here again, the capabilities provided with Windows Server 2008 R2 are relatively rudimentary, but as with the classification mechanisms, administrators and third-party developers can integrate property-based file processing into their applications. Here, as in the Classification Rule Definitions dialog box, you supply a name, a description, and a scope for the task. On the Action tab, you can select one of the following action types:

- File Expiration: Enables you to move files matching specified property values to another location
- Custom: Enables you to execute a program, command, or script on files matching specified property values

On the Condition tab, you specify the property values that files must possess for the file management task to process them, using the Property Condition dialog box, as shown in the accompanying figure. The Schedule tab enables you to configure the task to execute at specified intervals, and the Notification and Report tabs specify the types of information administrators receive about the task processing.
Although the File Expiration action type enables administrators to migrate files based on property values, it is the Custom action that provides true power for the savvy administrator. Using the Executable and Arguments fields, administrators can run a command, program, or script on the files having the specified properties. Some of the possible scenarios for customized tasks are as follows:

- Modify the permissions for the selected files using Icacls.exe
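For instance, a custom task's Executable and Arguments fields might invoke Icacls.exe against each matching file. The /inheritance:r and /grant switches are documented; the path and group below are placeholders.

```cmd
rem Remove inherited permissions and grant a group read-only access
rem to a file that a rule has classified as High Security.
icacls "D:\Shares\Finance\report.xlsx" /inheritance:r /grant "CONTOSO\Auditors:(R)"
```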
Using BranchCache

Branch office technologies were a major priority for the Windows Server 2008 R2 and Windows 7 development teams, and BranchCache is one of the results of that concentration. On an enterprise network, a branch office can consist of anything from a handful of workstations with a virtual private network (VPN) connection to a fully equipped network with its own servers and IT staff.
Branch offices nearly always require some network communication with the home office, however, and possibly with other branches as well. The wide area network (WAN) connections between remote sites are by nature slower and more expensive than local area network (LAN) connections, and the primary functions of BranchCache are to reduce the amount of WAN bandwidth consumed by branch office file sharing traffic and to improve access times for branch office users accessing files on servers at remote locations.
As the name implies, BranchCache is file caching software. Caching is a technique by which a system copies frequently used data to an alternative storage medium, so that it can satisfy future requests for the same data more quickly or less expensively. BranchCache works by caching files from remote servers on the local drive of a branch office computer so that other computers in the branch office can access those same files locally, instead of having to send repeated requests to the remote server.
BranchCache has two operational modes, as follows:

- Distributed Cache Mode: Up to 50 branch office computers cache files requested from remote servers on their local drives, and then make those cached files available to other computers on the local network, on a peer-to-peer basis.
- Hosted Cache Mode: A server in the branch office stores the cached files and makes them available to all of the local client computers.

The primary difference between these two modes is that Hosted Cache Mode requires the branch office to have a server running Windows Server 2008 R2, whereas Distributed Cache Mode requires only Windows 7 workstations.
The advantage of Hosted Cache Mode is that the server, and therefore the cache, is always available to all of the workstations in the branch office. Workstations in Distributed Cache Mode can only share cached data with computers on the local network, and if a workstation is hibernating or turned off, its cache is obviously unavailable.
BranchCache caches only read requests; this is because caching writes is a much more complicated operation than caching reads, due to the possible existence of conflicts between multiple versions of the same file. The BranchCache communication between the clients and the remote server proceeds as follows:

1. The client sends a file request to the remote server. The only difference from a standard request is that the client includes an identifier in the message, indicating that it supports BranchCache.

2. When the BranchCache-enabled remote server receives the request and recognizes that the client also supports BranchCache, it replies, not with the requested file, but with content metadata in the form of a hash describing the requested file.
The metadata is substantially smaller than the requested file itself, so the amount of WAN bandwidth utilized so far is relatively small.

3. The client attempts to locate the requested data on the branch office network, using the metadata it has received. On a Distributed Cache Mode installation, the client sends this message as a multicast transmission to the other BranchCache clients on the network, using the BranchCache discovery protocol. On a Hosted Cache Mode installation, the client sends the message to the local server that hosts the cache, using the BranchCache retrieval protocol.
4. If no computer on the branch office network has the data cached, the search fails. In Distributed Cache Mode, the client fails to receive a reply from another client on the network; in Hosted Cache Mode, the client receives a reply from the local server indicating that the requested data is not in the cache.

5. The client retransmits its original file request to the remote server. This time, however, the client omits the BranchCache identifier from the request message.

6. The remote server, on receiving a standard non-BranchCache request, replies by transmitting the requested file.

7. The client receives the requested file and, on a Distributed Cache Mode installation, stores the file in its local cache. On a Hosted Cache Mode installation, the client sends a message to its local caching server using the BranchCache hosted cache protocol, advertising the availability of its newly downloaded data; the server then retrieves and caches the file.

When another client subsequently requests the same data from the remote server, the communication process is exactly the same up until step 4. In this case, the client receives a reply from another computer (either a client or a server, depending on the mode) indicating that the requested data is present in its cache.
The client then uses the BranchCache retrieval protocol to download the data from the caching computer. For this and subsequent requests for that particular file, the only WAN traffic required is the exchange of request messages and content metadata, both of which are much smaller than the actual data file. BranchCache is not installed by default on Windows Server 2008 R2; you must install one or both of the BranchCache modules supplied with the operating system, and then create Group Policy settings to configure them.
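The exchange described above can be reduced to a toy simulation of Distributed Cache Mode. The class names, hash choice, and byte accounting are illustrative assumptions; the real discovery, retrieval, and hosted cache protocols are Windows components, not Python objects.

```python
import hashlib

class RemoteServer:
    def __init__(self, files):
        self.files = files
        self.wan_bytes = 0  # total bytes sent over the WAN link

    def request(self, name, branchcache=True):
        if branchcache:
            # Steps 1-2: reply with content metadata (a hash), not the file.
            meta = hashlib.sha256(self.files[name]).hexdigest()
            self.wan_bytes += len(meta)
            return meta
        # Steps 5-6: standard request; reply with the full file.
        self.wan_bytes += len(self.files[name])
        return self.files[name]

class BranchClient:
    def __init__(self, server, peer_cache):
        self.server = server
        self.peer_cache = peer_cache  # cache shared by the branch office peers

    def fetch(self, name):
        meta = self.server.request(name)             # steps 1-2
        if meta in self.peer_cache:                  # steps 3-4: found on a peer
            return self.peer_cache[meta]
        data = self.server.request(name, branchcache=False)  # steps 5-6
        self.peer_cache[meta] = data                 # step 7: cache and advertise
        return data

server = RemoteServer({"report.docx": b"x" * 10_000})
cache = {}
first, second = BranchClient(server, cache), BranchClient(server, cache)
first.fetch("report.docx")    # full WAN transfer, plus 64 bytes of metadata
second.fetch("report.docx")   # served from the branch cache; metadata only
print(server.wan_bytes)       # 10000 + 64 + 64 = 10128
```

Only the first request crosses the WAN in full; every subsequent request for the file costs just the metadata exchange.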
To enable BranchCache for all three of its supported protocols, you must install both of the following modules using Server Manager:

- The BranchCache feature, which supports the HTTP and BITS protocols
- The BranchCache for Network Files role service of the File Services role, which supports the SMB protocol
This setting enables the file server to transmit content metadata to qualified BranchCache clients instead of the actual files they request. When you enable Hash Publication for BranchCache, as shown in the accompanying figure, you can elect to allow hash publication for all file shares on the computer, or only for the file shares on which you explicitly enable BranchCache support. Computers running Windows 7 have the BranchCache client installed by default.
Enabling this setting without either one of the mode settings configures the client to cache server data on its local drive only, without accessing caches on other computers. A separate setting specifies the round-trip latency above which the client caches files from a server; the default setting is 80 ms. When you decrease the value, the client caches more files; increasing the value causes it to cache fewer files.
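The effect of the latency threshold can be sketched as a simple predicate; the server names and latencies below are invented. Lowering the threshold makes even nearby servers qualify, so more files end up cached.

```python
# Client caches content only from servers whose round-trip latency
# exceeds the threshold (80 ms by default).
def should_cache(latency_ms, threshold_ms=80):
    return latency_ms > threshold_ms

servers = {"hq-file01": 120, "branch-file01": 5}
print([s for s, lat in servers.items() if should_cache(lat)])
# ['hq-file01']
print([s for s, lat in servers.items() if should_cache(lat, threshold_ms=2)])
# ['hq-file01', 'branch-file01']
```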
Another setting specifies how much local disk space the client can devote to the cache; the default value is 5 percent. To facilitate this communication, administrators must configure any firewalls running on the clients to admit incoming traffic on the ports these two protocols use, which are Transmission Control Protocol (TCP) port 80 and a User Datagram Protocol (UDP) port, respectively. You must then provide the server with a certificate issued by a certification authority (CA) that the clients on the branch office network trust.
This can be an internal CA running on the network or a commercial CA run by a third party. Note, however, that client configuration values you set using Group Policy take precedence over those you set with Netsh. However, to do so, the namespace must be hosted on a server running Windows Server 2008 R2 or Windows Server 2008. If you enable access-based enumeration on a DFS namespace and on the target shares that the namespace links to (using the Share and Storage Management console), the shared folders are completely hidden from unauthorized users.
Prior to the R2 release, you could only do this by manually changing the permissions on the replicated folder. Note, however, that read-only folders impose an additional performance burden on the servers hosting them, because DFS Replication must intercept every Create and Open function call to determine whether the requested destination is in a read-only folder.
Since then, as anticipated, the IIS development team has been working on a variety of enhancements and extensions that build on that new architecture. Although based on the same basic structure as IIS 7.0, IIS 7.5 includes a number of new features, which this chapter examines.

Installing IIS 7.5

The Microsoft Web Platform is an integrated set of servers and tools that enable you to deploy complete Web solutions, including applications and ancillary servers, with a single procedure.
The Microsoft Web Platform Installer is a tool that enables you to select, download, install, and configure the features you want to deploy on your Web server. The Web Platform Installer file you download is a stub, a tiny file that enables you to select the modules you want to install and then to download them, using the interface shown in the accompanying figure. The installer provides a selection of collaboration, e-commerce, portal, and blog applications, and enforces the dependencies between the various elements.
During the installation process, Web Platform Installer prompts you for information needed by your selected applications, such as what subdirectory to install them into, what passwords to use, and so on.
When the process is complete, you have a fully functional Web site, complete with IIS and applications, ready to use. Selecting a server, site, or application and clicking Export Application launches a wizard in which you can select the elements that you want to export, as shown in the accompanying figure. The wizard then creates a package in the form of a Zip file, which contains the original content plus configuration settings in Extensible Markup Language (XML) format.
The tool also includes a Remote Agent Service, which administrators can use to synchronize Web servers in real time over a network connection. This enables you to replicate sites and servers on a regular basis so that you can create Web farms for load balancing and fault tolerance purposes. After installing the role service, you create an authoring rule that specifies what content you want to be able to publish and which users can publish it, using the interface shown in the accompanying figure. Then, using a feature called the WebDAV redirector on the client computer, you map a drive to your Web site.
Copying files to that drive automatically publishes them on the Web site. However, Microsoft is releasing an up- dated version of the service, to synchronize its feature set with the version included with Windows Server R2.
FTP was created at a time when security was not as great a concern as it is now, and as a result, it has no built-in data protection of any kind.
Clients transmit passwords in clear text, and transfer files to and from servers in unencrypted form. Windows Server 2008 R2, however, has an FTP server implementation that is enhanced with better security measures and other new features. Microsoft has also included an additional role service, FTP Extensibility, which enables developers to use their own managed code to create customized authentication, authorization, logging, and home directory providers.
However, Microsoft is releasing an updated version of the service to synchronize its feature set with the version included with Windows Server 2008 R2.

Hosting Applications with IIS 7.5

Server Core is a stripped-down version of the Windows Server operating system that eliminates many roles and features and most of the graphical interface.
The original Server Core release did not support the .NET Framework; because ASP.NET is one of the most commonly used development environments for Web applications today, this was a major shortcoming.
Windows Server 2008 R2, however, provides Server Core support for the .NET Framework, so Server Core installations can host ASP.NET applications. Microsoft has also incorporated this capability into Windows Server 2008 Service Pack 2.
This feature enables an administrator to configure an application pool to start up automatically, while temporarily not processing HTTP requests.
This allows applications requiring extensive initialization to finish loading the data they need, or to complete other processes, before they begin accepting HTTP requests. The default value is 4. You can also configure the property to terminate the FastCGI process when an error occurs.
You can also use it for anonymous authentication in place of the IUSR account.

Managing IIS 7.5

Windows Server 2008 R2 includes a number of IIS configuration tools that were previously available only as separate downloads, and Microsoft has enhanced many of the existing tools.
Once you have access to the IIS Windows PowerShell snap-in, you can display all of the cmdlets it contains by using the following command:

Get-Command -PSSnapin WebAdministration

The snap-in uses three different types of cmdlets, as follows:

- PowerShell provider cmdlets
- Low-level configuration cmdlets
- Task-oriented cmdlets

These cmdlet types correspond to three different methods of managing IIS from the Windows PowerShell prompt, as described in the following sections.
By piping the results of the Get-Item cmdlet to the Select-Object cmdlet, you can display all of the properties of a selected site, as shown in the accompanying figure. Any module that includes a provider hierarchy must support the standard provider cmdlets. Once within the IIS hierarchy, you can use low-level configuration cmdlets to manage specific IIS elements without having to type extended path names. This new architecture, carried over into the IIS 7.5 release, makes the Web server highly extensible.
This extensibility complicates the process of developing a Windows PowerShell management strategy, however. Cmdlets might have static parameters that enable them to manage specific properties of an element, but if a third-party developer creates an IIS extension that adds new properties to that element, the existing cmdlets cannot manage them. Therefore, the IIS Windows PowerShell snap-in includes low-level configuration cmdlets that you can use to view and manage all of the hundreds of IIS configuration settings, including custom settings added by IIS extensions.
One set of task-oriented cmdlets, concerned with managing IIS sites, is as follows:

- Get-Website
- New-Website
- Remove-Website
- Start-Website
- Stop-Website

Unlike the low-level cmdlets, the task-oriented cmdlets do not rely on the IIS namespace (although they can utilize it), and they use static parameters to configure specific properties.
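For example, the task-oriented cmdlets let you create and control a site directly from the prompt; the site name, port, and path below are placeholders.

```powershell
Add-PSSnapin WebAdministration

# Create a new site, verify it, then stop it.
New-Website -Name "Intranet" -Port 8080 -PhysicalPath "C:\inetpub\intranet"
Get-Website -Name "Intranet"
Stop-Website -Name "Intranet"
```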
Once you have created a site, you can even use the Windows PowerShell interface to create new content. Also accessible through the console are the features described in the following sections.

Using Configuration Editor

Configuration Editor is a graphical tool that enables administrators to view and manage any setting in any of the IIS configuration files. Because the tool is based on the IIS configuration schema, it can even manage custom settings without any interface modifications.
In addition, once you have performed your modifications, Configuration Editor can generate a script that duplicates those modifications for execution on other servers. You can configure a multitude of settings for the new site, after which it appears as part of the collection. Finally, back on the Configuration Editor page, clicking Generate Script in the Actions pane displays script code that will create a new site identical to the one you just added, using managed code (C#), JavaScript, or the Appcmd.exe command-line tool.
From this window, you can copy the code to a text file to save for later use. Request Filtering is essentially a graphical interface that inserts code into Web.config files. Requests that the filtering mechanism rejects are logged with error codes that indicate the reason for the rejection. The Request Filtering page, shown in the accompanying figure, contains seven tabs that enable you to create the following types of filters:

- File Name Extensions: Filters incoming HTTP requests based on the extension of the file requested
For example, a Hidden Segments filter enables you to filter out requests for files in the bin folder without rejecting requests for files in the binary folder.
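The distinction holds because the comparison is against whole path segments, not substrings. Here is a minimal sketch of that matching logic (an assumption about the behavior, not the IIS implementation):

```python
def hidden_segment(url_path, segment):
    # Match the filter only against complete path segments.
    return segment.lower() in [s.lower() for s in url_path.strip("/").split("/")]

print(hidden_segment("/app/bin/secret.dll", "bin"))     # True  -> reject
print(hidden_segment("/app/binary/readme.txt", "bin"))  # False -> allow
```

A naive substring check would reject both URLs, blocking legitimate content.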
This capability is particularly useful in preventing SQL injection attacks, in which query strings contain escape characters or other damaging code.

Using Configuration Tracing

In Windows Server 2008 R2, configuration tracing is disabled by default. Clicking Scan This Role initiates the process by which the analyzer gathers information about IIS and compares it with a set of predefined rules.
IIS conditions that differ substantially from the rules are listed in the analyzer as noncompliant results. IIS is a large and complex product; there is a great deal to learn about it, and a great many extensions and add-ons are available.
Both of these sites provide the latest IIS news, learning tools, community participation, and software downloads. In late 2008, sales of mobile computers exceeded those of desktop computers for the first time. Many of these mobile users require access to the internal resources of their corporate networks to perform their required tasks, and Microsoft provides a number of mechanisms that enable them to do so. Virtual private networking can provide remote clients with complete access to the company intranet, and Network Policy Server helps administrators keep remote connections safe and secure.
In Windows Server 2008 R2, Microsoft has enhanced these services with new features, and has also introduced a new remote connectivity service for Windows Server 2008 R2 servers and Windows 7 clients, called DirectAccess.
Introducing DirectAccess

A virtual private network (VPN) connection is a secure pipeline between a remote client computer and a network server, using the Internet as a conduit. When the client establishes the VPN connection with the server, it uses a process called tunneling to encapsulate the intranet traffic within standard Internet packets. With VPNs, the user on the client computer must explicitly launch the connection to the server, using a process similar to establishing a dial-up networking connection.
Depending on the server policies, this can take several minutes. If the client loses its Internet connection for any reason, such as wandering out of a wireless hot spot, the user must manually reestablish the VPN connection. DirectAccess, by contrast, uses connections that the client computer establishes automatically and that are always on. Users can access intranet resources without any deliberate interaction, just as though they were connected directly to the corporate network.
As soon as the client computer connects to the Internet, it begins the DirectAccess connection process, which is completely invisible to the user. By the time the user is logged on and ready to work, the client can have downloaded e-mail and mapped drives to file server shares on the intranet. DirectAccess not only simplifies the connection process for the user, it also benefits the network administrator.
DirectAccess connections are bidirectional, and Windows 7 clients establish their computer connections before the user even logs on to the system.
This enables administrators to gain access to the client computer at any time, so they can apply Group Policy settings, deploy patches, or perform other upgrade and maintenance tasks. Some of the other benefits of DirectAccess are as follows:

- Intranet detection: The DirectAccess client determines whether the computer is connecting directly to the corporate network or accessing the network remotely, and behaves accordingly.
Users can authenticate with smart cards or biometric devices. In DirectAccess, clients send intranet traffic through the tunnel, while Internet traffic bypasses the tunnel and goes directly to the Internet.
This is called split-tunnel routing. Globally routable addressing is why DirectAccess relies so heavily on IPv6 for its connectivity: client computers can use the same IPv6 addresses wherever they happen to be in the world.
Unfortunately, many networks still use IPv4, including most of the Internet. Therefore, DirectAccess includes support for a number of IPv6 transition technologies, which are essentially protocols that enable computers to transmit IPv6 packets over an IPv4 network. DirectAccess uses IPsec to authenticate client computers and users, and to ensure that the private intranet data that clients and servers transmit over the Internet remains private.